I1225 12:56:10.730939 9 e2e.go:243] Starting e2e run "4f1071a8-5753-45ec-9db3-dacd05d6ae4a" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577278569 - Will randomize all specs
Will run 215 of 4412 specs

Dec 25 12:56:10.951: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 12:56:10.955: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 25 12:56:10.980: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 25 12:56:11.012: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 25 12:56:11.012: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 25 12:56:11.012: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 25 12:56:11.023: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 25 12:56:11.023: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 25 12:56:11.023: INFO: e2e test version: v1.15.7
Dec 25 12:56:11.024: INFO: kube-apiserver version: v1.15.1
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 12:56:11.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
Dec 25 12:56:11.067: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 25 12:56:35.318: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 25 12:56:35.322: INFO: Pod pod-with-poststart-http-hook still exists
Dec 25 12:56:37.323: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 25 12:56:37.333: INFO: Pod pod-with-poststart-http-hook still exists
Dec 25 12:56:39.323: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 25 12:56:39.330: INFO: Pod pod-with-poststart-http-hook still exists
Dec 25 12:56:41.323: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 25 12:56:41.333: INFO: Pod pod-with-poststart-http-hook still exists
Dec 25 12:56:43.323: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 25 12:56:43.377: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 12:56:43.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6387" for this suite.
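Editor's note: the test above pairs a handler pod (created in BeforeEach to serve the HTTPGet request) with a pod named pod-with-poststart-http-hook whose container declares a postStart httpGet hook. A minimal sketch of such a pod, assuming an illustrative handler IP, port, and path rather than the ones the suite generates internally:

```yaml
# Sketch of a pod with a postStart httpGet lifecycle hook.
# Handler host/port/path are hypothetical; the e2e suite builds its own.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical handler path
          host: 10.32.0.2             # hypothetical handler pod IP
          port: 8080
```

The "check poststart hook" step then verifies that the handler actually received the request before the pod is deleted.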
Dec 25 12:57:05.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 12:57:05.558: INFO: namespace container-lifecycle-hook-6387 deletion completed in 22.171187797s

• [SLOW TEST:54.535 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 12:57:05.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 25 12:57:05.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5451'
Dec 25 12:57:08.353: INFO: stderr: ""
Dec 25 12:57:08.353: INFO: stdout: "pod/pause created\n"
Dec 25 12:57:08.353: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 25 12:57:08.354: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5451" to be "running and ready"
Dec 25 12:57:08.364: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064857ms
Dec 25 12:57:10.371: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017066734s
Dec 25 12:57:12.389: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035375065s
Dec 25 12:57:14.401: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047699315s
Dec 25 12:57:16.441: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.087454224s
Dec 25 12:57:16.441: INFO: Pod "pause" satisfied condition "running and ready"
Dec 25 12:57:16.441: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 25 12:57:16.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5451'
Dec 25 12:57:16.629: INFO: stderr: ""
Dec 25 12:57:16.630: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 25 12:57:16.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5451'
Dec 25 12:57:16.706: INFO: stderr: ""
Dec 25 12:57:16.706: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 25 12:57:16.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5451'
Dec 25 12:57:16.826: INFO: stderr: ""
Dec 25 12:57:16.826: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 25 12:57:16.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5451'
Dec 25 12:57:16.919: INFO: stderr: ""
Dec 25 12:57:16.919: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 25 12:57:16.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5451'
Dec 25 12:57:17.019: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 25 12:57:17.019: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 25 12:57:17.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5451'
Dec 25 12:57:17.144: INFO: stderr: "No resources found.\n"
Dec 25 12:57:17.144: INFO: stdout: ""
Dec 25 12:57:17.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5451 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 25 12:57:17.223: INFO: stderr: ""
Dec 25 12:57:17.223: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 12:57:17.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5451" for this suite.
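Editor's note: the label test drives kubectl imperatively. Declaratively, the state after `kubectl label pods pause testing-label=testing-label-value` corresponds to a manifest like the following sketch; only the pod name and the `name=pause` selector label are taken from the log, the image is illustrative:

```yaml
# Sketch of the pause pod as it looks while the testing-label is applied.
# `kubectl label pods pause testing-label-` (trailing dash) removes it again.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause                           # selector label used by the cleanup step
    testing-label: testing-label-value    # the label added and later removed
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1           # illustrative image
```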
Dec 25 12:57:23.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 12:57:23.394: INFO: namespace kubectl-5451 deletion completed in 6.166120142s

• [SLOW TEST:17.835 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 12:57:23.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-38e1c409-8749-4f8a-a45f-d7f0133d2b81
STEP: Creating secret with name s-test-opt-upd-00ab01f4-4892-4e77-91e4-247701e7f7eb
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-38e1c409-8749-4f8a-a45f-d7f0133d2b81
STEP: Updating secret s-test-opt-upd-00ab01f4-4892-4e77-91e4-247701e7f7eb
STEP: Creating secret with name s-test-opt-create-c9ee56fa-a14d-4d1a-966c-3615cb506423
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 12:58:54.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2749" for this suite.
Dec 25 12:59:34.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 12:59:34.795: INFO: namespace projected-2749 deletion completed in 40.156955522s

• [SLOW TEST:131.400 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 12:59:34.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 25 12:59:45.510: INFO: Successfully updated pod "pod-update-474eb105-6c0a-4289-9018-6beef84a853d"
STEP: verifying the updated pod is in kubernetes
Dec 25 12:59:45.530: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 12:59:45.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6062" for this suite.
Dec 25 13:00:07.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:00:07.655: INFO: namespace pods-6062 deletion completed in 22.119984055s

• [SLOW TEST:32.860 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:00:07.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 25 13:00:07.749: INFO: Waiting up to 5m0s for pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f" in namespace "var-expansion-3796" to be "success or failure"
Dec 25 13:00:07.894: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f": Phase="Pending", Reason="", readiness=false. Elapsed: 144.161288ms
Dec 25 13:00:09.907: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157864132s
Dec 25 13:00:11.916: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166366486s
Dec 25 13:00:13.924: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174151733s
Dec 25 13:00:15.944: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194490581s
Dec 25 13:00:17.994: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.244713882s
STEP: Saw pod success
Dec 25 13:00:17.994: INFO: Pod "var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f" satisfied condition "success or failure"
Dec 25 13:00:18.006: INFO: Trying to get logs from node iruya-node pod var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f container dapi-container:
STEP: delete the pod
Dec 25 13:00:18.201: INFO: Waiting for pod var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f to disappear
Dec 25 13:00:18.211: INFO: Pod var-expansion-4ca1567a-d7a1-4aa8-a11a-658cba62500f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:00:18.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3796" for this suite.
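Editor's note: the env-composition pod above relies on `$(VAR)` expansion in the container's `env` list, where a later variable may reference an earlier one. A sketch with illustrative variable names; only the container name dapi-container comes from the log:

```yaml
# Sketch of composing env vars via $(VAR) references (variable names
# are illustrative, not the ones the e2e suite uses).
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # kubelet expands this to prefix-foo-value-suffix
```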
Dec 25 13:00:24.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:00:24.437: INFO: namespace var-expansion-3796 deletion completed in 6.220092839s

• [SLOW TEST:16.781 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:00:24.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:00:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7045" for this suite.
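Editor's note: the hostAliases test schedules a busybox pod whose `spec.hostAliases` entries the kubelet merges into the container's /etc/hosts. A sketch with illustrative hostnames and IP (the suite generates its own):

```yaml
# Sketch of a pod with hostAliases; the kubelet adds these entries to
# /etc/hosts inside the container. Names and IP are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]
```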
Dec 25 13:01:36.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:01:36.866: INFO: namespace kubelet-test-7045 deletion completed in 1m2.182461995s

• [SLOW TEST:72.428 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:01:36.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7409
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 25 13:01:36.999: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 25 13:02:19.183: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7409 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 13:02:19.183: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 13:02:20.800: INFO: Found all expected endpoints: [netserver-0]
Dec 25 13:02:21.022: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7409 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 13:02:21.022: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 13:02:22.497: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:02:22.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7409" for this suite.
Dec 25 13:02:44.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:02:44.642: INFO: namespace pod-network-test-7409 deletion completed in 22.12799209s

• [SLOW TEST:67.776 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:02:44.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:02:44.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551" in namespace "projected-9387" to be "success or failure"
Dec 25 13:02:44.718: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551": Phase="Pending", Reason="", readiness=false. Elapsed: 4.55442ms
Dec 25 13:02:46.726: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012547716s
Dec 25 13:02:48.743: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029372411s
Dec 25 13:02:50.761: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047633395s
Dec 25 13:02:52.780: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066416183s
Dec 25 13:02:54.796: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0822708s
STEP: Saw pod success
Dec 25 13:02:54.796: INFO: Pod "downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551" satisfied condition "success or failure"
Dec 25 13:02:54.801: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551 container client-container:
STEP: delete the pod
Dec 25 13:02:54.935: INFO: Waiting for pod downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551 to disappear
Dec 25 13:02:54.947: INFO: Pod downwardapi-volume-ae2d29a3-6bf6-4daf-9e99-0252bef4e551 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:02:54.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9387" for this suite.
Dec 25 13:03:01.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:03:01.209: INFO: namespace projected-9387 deletion completed in 6.255647078s

• [SLOW TEST:16.567 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:03:01.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for
a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 25 13:03:01.366: INFO: Waiting up to 5m0s for pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e" in namespace "emptydir-646" to be "success or failure"
Dec 25 13:03:01.399: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.053959ms
Dec 25 13:03:03.407: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041002027s
Dec 25 13:03:05.425: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058133914s
Dec 25 13:03:07.431: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064254048s
Dec 25 13:03:09.441: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074909811s
Dec 25 13:03:11.449: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082972902s
Dec 25 13:03:13.458: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.09194262s
STEP: Saw pod success
Dec 25 13:03:13.459: INFO: Pod "pod-74a32e59-1077-4ac9-a65b-ac0983b5335e" satisfied condition "success or failure"
Dec 25 13:03:13.466: INFO: Trying to get logs from node iruya-node pod pod-74a32e59-1077-4ac9-a65b-ac0983b5335e container test-container:
STEP: delete the pod
Dec 25 13:03:13.824: INFO: Waiting for pod pod-74a32e59-1077-4ac9-a65b-ac0983b5335e to disappear
Dec 25 13:03:13.833: INFO: Pod pod-74a32e59-1077-4ac9-a65b-ac0983b5335e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:03:13.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-646" for this suite.
Dec 25 13:03:19.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:03:20.005: INFO: namespace emptydir-646 deletion completed in 6.168577346s

• [SLOW TEST:18.796 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:03:20.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default
service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-433dfa0e-0552-4d4c-b137-7341ff00e52c
STEP: Creating a pod to test consume secrets
Dec 25 13:03:20.104: INFO: Waiting up to 5m0s for pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e" in namespace "secrets-3" to be "success or failure"
Dec 25 13:03:20.110: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15745ms
Dec 25 13:03:22.164: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059903386s
Dec 25 13:03:24.208: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103499581s
Dec 25 13:03:26.219: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114874617s
Dec 25 13:03:28.249: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144966805s
Dec 25 13:03:30.262: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157725537s
STEP: Saw pod success
Dec 25 13:03:30.262: INFO: Pod "pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e" satisfied condition "success or failure"
Dec 25 13:03:30.267: INFO: Trying to get logs from node iruya-node pod pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e container secret-volume-test:
STEP: delete the pod
Dec 25 13:03:30.411: INFO: Waiting for pod pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e to disappear
Dec 25 13:03:30.417: INFO: Pod pod-secrets-88fa5a61-d884-41c8-b32f-924a2478bd6e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:03:30.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3" for this suite.
Dec 25 13:03:36.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:03:36.572: INFO: namespace secrets-3 deletion completed in 6.14332192s

• [SLOW TEST:16.566 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:03:36.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account
to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-1938/secret-test-1da32fea-dbcc-4a2b-b7bb-e7940a47d395 STEP: Creating a pod to test consume secrets Dec 25 13:03:36.776: INFO: Waiting up to 5m0s for pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53" in namespace "secrets-1938" to be "success or failure" Dec 25 13:03:36.797: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Pending", Reason="", readiness=false. Elapsed: 20.771082ms Dec 25 13:03:38.805: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028940431s Dec 25 13:03:40.811: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035148807s Dec 25 13:03:42.817: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041419149s Dec 25 13:03:44.836: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059583141s Dec 25 13:03:46.860: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Pending", Reason="", readiness=false. Elapsed: 10.083870375s Dec 25 13:03:48.871: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.095309936s STEP: Saw pod success Dec 25 13:03:48.871: INFO: Pod "pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53" satisfied condition "success or failure" Dec 25 13:03:48.879: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53 container env-test: STEP: delete the pod Dec 25 13:03:49.135: INFO: Waiting for pod pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53 to disappear Dec 25 13:03:49.147: INFO: Pod pod-configmaps-2509a5af-1aa1-4c57-8c3f-e870d5cbad53 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:03:49.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1938" for this suite. Dec 25 13:03:57.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:03:57.375: INFO: namespace secrets-1938 deletion completed in 8.171918343s • [SLOW TEST:20.802 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:03:57.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 25 13:03:57.456: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:04:07.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6287" for this suite. Dec 25 13:04:59.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:05:00.077: INFO: namespace pods-6287 deletion completed in 52.150425926s • [SLOW TEST:62.702 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:05:00.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services 
for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Dec 25 13:05:00.124: INFO: namespace kubectl-7344 Dec 25 13:05:00.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7344' Dec 25 13:05:00.397: INFO: stderr: "" Dec 25 13:05:00.398: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Dec 25 13:05:01.415: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:01.415: INFO: Found 0 / 1 Dec 25 13:05:02.409: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:02.410: INFO: Found 0 / 1 Dec 25 13:05:03.411: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:03.412: INFO: Found 0 / 1 Dec 25 13:05:04.412: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:04.412: INFO: Found 0 / 1 Dec 25 13:05:05.409: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:05.410: INFO: Found 0 / 1 Dec 25 13:05:06.411: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:06.411: INFO: Found 0 / 1 Dec 25 13:05:07.407: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:07.407: INFO: Found 0 / 1 Dec 25 13:05:08.413: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:08.414: INFO: Found 0 / 1 Dec 25 13:05:09.478: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:09.478: INFO: Found 0 / 1 Dec 25 13:05:10.411: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:10.411: INFO: Found 0 / 1 Dec 25 13:05:11.445: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:11.446: INFO: Found 1 / 1 Dec 25 13:05:11.446: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 25 13:05:11.551: INFO: Selector matched 1 pods for map[app:redis] Dec 25 13:05:11.551: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Dec 25 13:05:11.551: INFO: wait on redis-master startup in kubectl-7344 Dec 25 13:05:11.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tz9sh redis-master --namespace=kubectl-7344' Dec 25 13:05:11.711: INFO: stderr: "" Dec 25 13:05:11.711: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 25 Dec 13:05:09.407 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Dec 13:05:09.409 # Server started, Redis version 3.2.12\n1:M 25 Dec 13:05:09.413 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Dec 13:05:09.413 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Dec 25 13:05:11.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7344' Dec 25 13:05:11.965: INFO: stderr: "" Dec 25 13:05:11.965: INFO: stdout: "service/rm2 exposed\n" Dec 25 13:05:11.977: INFO: Service rm2 in namespace kubectl-7344 found. 
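(For reference, the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` step recorded above is roughly equivalent to applying a Service manifest by hand. This is a hedged sketch, not output from the run: the `app: redis` selector is inferred from the `map[app:redis]` selector shown earlier in this test's log, and `protocol: TCP` is the kubectl default.)

```yaml
# Sketch of the Service that the expose step above would generate.
# Assumption: the RC's pod label is app=redis, per the log's selector map.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7344
spec:
  selector:
    app: redis
  ports:
  - port: 1234        # service port, as passed via --port
    targetPort: 6379  # redis container port, as passed via --target-port
    protocol: TCP
```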
STEP: exposing service Dec 25 13:05:13.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7344' Dec 25 13:05:14.159: INFO: stderr: "" Dec 25 13:05:14.159: INFO: stdout: "service/rm3 exposed\n" Dec 25 13:05:14.172: INFO: Service rm3 in namespace kubectl-7344 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:05:16.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7344" for this suite. Dec 25 13:05:40.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:05:40.361: INFO: namespace kubectl-7344 deletion completed in 24.174292783s • [SLOW TEST:40.284 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:05:40.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-71e8d022-d33b-4cac-b894-e69322d95674 STEP: Creating a pod to test consume configMaps Dec 25 13:05:40.636: INFO: Waiting up to 5m0s for pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3" in namespace "configmap-7239" to be "success or failure" Dec 25 13:05:40.661: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.695854ms Dec 25 13:05:42.682: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045838519s Dec 25 13:05:44.687: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050632682s Dec 25 13:05:46.694: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058010991s Dec 25 13:05:48.705: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3": Phase="Running", Reason="", readiness=true. Elapsed: 8.069127345s Dec 25 13:05:50.718: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.082395883s STEP: Saw pod success Dec 25 13:05:50.719: INFO: Pod "pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3" satisfied condition "success or failure" Dec 25 13:05:50.736: INFO: Trying to get logs from node iruya-node pod pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3 container configmap-volume-test: STEP: delete the pod Dec 25 13:05:50.918: INFO: Waiting for pod pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3 to disappear Dec 25 13:05:50.927: INFO: Pod pod-configmaps-845c5c3d-46b3-4d26-aad8-53242bbab6c3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:05:50.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7239" for this suite. Dec 25 13:05:56.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:05:57.071: INFO: namespace configmap-7239 deletion completed in 6.136076725s • [SLOW TEST:16.709 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:05:57.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for 
a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8153 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 25 13:05:57.263: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 25 13:06:35.415: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8153 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 25 13:06:35.415: INFO: >>> kubeConfig: /root/.kube/config Dec 25 13:06:36.083: INFO: Waiting for endpoints: map[] Dec 25 13:06:36.094: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8153 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 25 13:06:36.095: INFO: >>> kubeConfig: /root/.kube/config Dec 25 13:06:36.529: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:06:36.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8153" for this suite. 
Dec 25 13:07:00.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:07:00.755: INFO: namespace pod-network-test-8153 deletion completed in 24.198102904s • [SLOW TEST:63.683 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:07:00.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 25 13:07:00.909: INFO: Waiting up to 5m0s for pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b" in namespace "emptydir-6615" to be "success or failure" Dec 25 13:07:00.918: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.755031ms Dec 25 13:07:02.930: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020513877s Dec 25 13:07:04.938: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02876786s Dec 25 13:07:06.950: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041193174s Dec 25 13:07:08.973: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063377569s Dec 25 13:07:10.998: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08833985s STEP: Saw pod success Dec 25 13:07:10.998: INFO: Pod "pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b" satisfied condition "success or failure" Dec 25 13:07:11.005: INFO: Trying to get logs from node iruya-node pod pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b container test-container: STEP: delete the pod Dec 25 13:07:11.104: INFO: Waiting for pod pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b to disappear Dec 25 13:07:11.140: INFO: Pod pod-a8a2a494-9de5-4d83-a95b-276e9d7fca9b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:07:11.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6615" for this suite. 
Dec 25 13:07:17.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:07:17.927: INFO: namespace emptydir-6615 deletion completed in 6.740159899s • [SLOW TEST:17.172 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:07:17.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Dec 25 13:07:18.100: INFO: Waiting up to 5m0s for pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783" in namespace "emptydir-2815" to be "success or failure" Dec 25 13:07:18.186: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783": Phase="Pending", Reason="", readiness=false. Elapsed: 85.783454ms Dec 25 13:07:20.197: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.096693966s Dec 25 13:07:23.244: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783": Phase="Pending", Reason="", readiness=false. Elapsed: 5.144029211s Dec 25 13:07:25.257: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783": Phase="Pending", Reason="", readiness=false. Elapsed: 7.156860818s Dec 25 13:07:27.268: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783": Phase="Pending", Reason="", readiness=false. Elapsed: 9.167619303s Dec 25 13:07:29.276: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.17613027s STEP: Saw pod success Dec 25 13:07:29.276: INFO: Pod "pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783" satisfied condition "success or failure" Dec 25 13:07:29.285: INFO: Trying to get logs from node iruya-node pod pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783 container test-container: STEP: delete the pod Dec 25 13:07:29.436: INFO: Waiting for pod pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783 to disappear Dec 25 13:07:29.441: INFO: Pod pod-9bd7ba8f-7200-450d-bd84-3bfdb9d7b783 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:07:29.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2815" for this suite. 
Dec 25 13:07:35.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:07:35.609: INFO: namespace emptydir-2815 deletion completed in 6.162459353s • [SLOW TEST:17.681 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:07:35.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:07:45.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7163" for this suite. 
Dec 25 13:08:47.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:08:48.101: INFO: namespace kubelet-test-7163 deletion completed in 1m2.245906086s • [SLOW TEST:72.492 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:08:48.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 25 13:09:14.365: INFO: Container started at 2019-12-25 13:08:55 +0000 UTC, pod became ready at 2019-12-25 13:09:12 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 
13:09:14.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6021" for this suite. Dec 25 13:09:36.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:09:36.659: INFO: namespace container-probe-6021 deletion completed in 22.288198763s • [SLOW TEST:48.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:09:36.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-dtt9 STEP: Creating a pod to test atomic-volume-subpath Dec 25 13:09:36.966: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dtt9" in namespace "subpath-3637" to be "success or failure" Dec 25 
13:09:36.991: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.266495ms Dec 25 13:09:39.002: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03586258s Dec 25 13:09:41.014: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047926274s Dec 25 13:09:43.020: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053357389s Dec 25 13:09:45.030: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063238452s Dec 25 13:09:47.036: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069969794s Dec 25 13:09:49.045: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.0784703s Dec 25 13:09:51.056: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.089097116s Dec 25 13:09:53.063: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.096757068s Dec 25 13:09:55.071: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.104108655s Dec 25 13:09:57.078: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.111785733s Dec 25 13:09:59.101: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.134249384s Dec 25 13:10:01.116: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 24.149084947s Dec 25 13:10:03.130: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 26.163144222s Dec 25 13:10:05.143: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. 
Elapsed: 28.176640049s Dec 25 13:10:07.152: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Running", Reason="", readiness=true. Elapsed: 30.185174456s Dec 25 13:10:09.166: INFO: Pod "pod-subpath-test-configmap-dtt9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.20003082s STEP: Saw pod success Dec 25 13:10:09.167: INFO: Pod "pod-subpath-test-configmap-dtt9" satisfied condition "success or failure" Dec 25 13:10:09.172: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-dtt9 container test-container-subpath-configmap-dtt9: STEP: delete the pod Dec 25 13:10:09.376: INFO: Waiting for pod pod-subpath-test-configmap-dtt9 to disappear Dec 25 13:10:09.386: INFO: Pod pod-subpath-test-configmap-dtt9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-dtt9 Dec 25 13:10:09.386: INFO: Deleting pod "pod-subpath-test-configmap-dtt9" in namespace "subpath-3637" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:10:09.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3637" for this suite. 
Dec 25 13:10:15.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:10:15.516: INFO: namespace subpath-3637 deletion completed in 6.115920367s • [SLOW TEST:38.857 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:10:15.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 25 13:10:15.635: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 25 13:10:15.652: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 25 13:10:20.665: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 25 13:10:24.699: INFO: Creating deployment "test-rolling-update-deployment" Dec 25 
13:10:24.709: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 25 13:10:24.717: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 25 13:10:26.733: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 25 13:10:26.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:10:28.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:10:30.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712876224, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:10:32.748: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 25 13:10:32.766: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9582,SelfLink:/apis/apps/v1/namespaces/deployment-9582/deployments/test-rolling-update-deployment,UID:689388d1-e4d3-4b1b-8857-3a09438bc387,ResourceVersion:18014108,Generation:1,CreationTimestamp:2019-12-25 13:10:24 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-25 13:10:24 +0000 UTC 2019-12-25 13:10:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-25 13:10:32 +0000 UTC 2019-12-25 13:10:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 25 13:10:32.775: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9582,SelfLink:/apis/apps/v1/namespaces/deployment-9582/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:852a749f-fade-443f-a3a5-aac8bdee7f18,ResourceVersion:18014096,Generation:1,CreationTimestamp:2019-12-25 13:10:24 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 689388d1-e4d3-4b1b-8857-3a09438bc387 0xc001eed0f7 0xc001eed0f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 25 13:10:32.775: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 25 13:10:32.775: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9582,SelfLink:/apis/apps/v1/namespaces/deployment-9582/replicasets/test-rolling-update-controller,UID:79eb123a-5574-416e-88a1-3f5e08ecae4d,ResourceVersion:18014106,Generation:2,CreationTimestamp:2019-12-25 13:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 689388d1-e4d3-4b1b-8857-3a09438bc387 0xc001eed027 0xc001eed028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 25 13:10:32.780: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-x97tl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-x97tl,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9582,SelfLink:/api/v1/namespaces/deployment-9582/pods/test-rolling-update-deployment-79f6b9d75c-x97tl,UID:efbd05f2-528e-429d-9704-7df5291b0d71,ResourceVersion:18014095,Generation:0,CreationTimestamp:2019-12-25 13:10:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 852a749f-fade-443f-a3a5-aac8bdee7f18 0xc002096c17 0xc002096c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jkh94 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jkh94,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-jkh94 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002096ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002096cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:10:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:10:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:10:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:10:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-25 13:10:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-25 13:10:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f3920748775665cc15025070407dae2dc3e939228da8104948e0af068ac4adc1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:10:32.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-9582" for this suite. Dec 25 13:10:38.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:10:38.990: INFO: namespace deployment-9582 deletion completed in 6.203003973s • [SLOW TEST:23.473 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:10:38.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 25 13:10:39.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 25 13:10:39.159: INFO: Waiting for terminating namespaces to be deleted... 
Dec 25 13:10:39.162: INFO: Logging pods the kubelet thinks is on node iruya-node before test Dec 25 13:10:39.187: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.187: INFO: Container kube-proxy ready: true, restart count 0 Dec 25 13:10:39.187: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 25 13:10:39.187: INFO: Container weave ready: true, restart count 0 Dec 25 13:10:39.187: INFO: Container weave-npc ready: true, restart count 0 Dec 25 13:10:39.187: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Dec 25 13:10:39.201: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.201: INFO: Container kube-apiserver ready: true, restart count 0 Dec 25 13:10:39.201: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.201: INFO: Container kube-scheduler ready: true, restart count 7 Dec 25 13:10:39.201: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.201: INFO: Container coredns ready: true, restart count 0 Dec 25 13:10:39.201: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.201: INFO: Container etcd ready: true, restart count 0 Dec 25 13:10:39.201: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 25 13:10:39.201: INFO: Container weave ready: true, restart count 0 Dec 25 13:10:39.201: INFO: Container weave-npc ready: true, restart count 0 Dec 25 13:10:39.201: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 
container statuses recorded) Dec 25 13:10:39.201: INFO: Container coredns ready: true, restart count 0 Dec 25 13:10:39.201: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.201: INFO: Container kube-controller-manager ready: true, restart count 10 Dec 25 13:10:39.201: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Dec 25 13:10:39.201: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Dec 25 13:10:39.329: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Dec 25 13:10:39.329: INFO: Pod 
weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711.15e39f4b1c9cacc4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6596/filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711 to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711.15e39f4c45452551], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711.15e39f4d3db75719], Reason = [Created], Message = [Created container filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711] STEP: Considering event: Type = [Normal], Name = [filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711.15e39f4d69df8e0c], Reason = [Started], Message = [Started container filler-pod-33db5b1d-2184-4991-a25b-5778ed6b2711] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f.15e39f4b1a51f7e2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6596/filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f.15e39f4cfab2b278], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f.15e39f4dc0cee8ed], Reason = [Created], Message = [Created container filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f] STEP: Considering event: Type = [Normal], Name = [filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f.15e39f4de29f04fa], Reason = [Started], Message = [Started container filler-pod-f4e5cf06-42bc-4883-a3ca-6b9c0bcf456f] STEP: 
Considering event: Type = [Warning], Name = [additional-pod.15e39f4e619b3099], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:10:54.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6596" for this suite. Dec 25 13:11:02.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:11:02.843: INFO: namespace sched-pred-6596 deletion completed in 8.116058987s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:23.853 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:11:02.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-4gck STEP: Creating a pod to test atomic-volume-subpath Dec 25 13:11:04.654: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4gck" in namespace "subpath-8697" to be "success or failure" Dec 25 13:11:04.677: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Pending", Reason="", readiness=false. Elapsed: 22.040972ms Dec 25 13:11:06.684: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02951056s Dec 25 13:11:08.696: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04152508s Dec 25 13:11:10.707: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052473962s Dec 25 13:11:12.715: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060473816s Dec 25 13:11:14.740: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 10.084909069s Dec 25 13:11:16.759: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 12.104177546s Dec 25 13:11:18.774: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 14.119493211s Dec 25 13:11:20.789: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 16.134073447s Dec 25 13:11:22.816: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.161476064s Dec 25 13:11:24.827: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 20.171899998s Dec 25 13:11:26.835: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 22.180232142s Dec 25 13:11:28.865: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 24.209878569s Dec 25 13:11:30.881: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 26.226039908s Dec 25 13:11:32.907: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Running", Reason="", readiness=true. Elapsed: 28.251878639s Dec 25 13:11:35.735: INFO: Pod "pod-subpath-test-secret-4gck": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.080234552s STEP: Saw pod success Dec 25 13:11:35.735: INFO: Pod "pod-subpath-test-secret-4gck" satisfied condition "success or failure" Dec 25 13:11:35.741: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-4gck container test-container-subpath-secret-4gck: STEP: delete the pod Dec 25 13:11:36.120: INFO: Waiting for pod pod-subpath-test-secret-4gck to disappear Dec 25 13:11:36.152: INFO: Pod pod-subpath-test-secret-4gck no longer exists STEP: Deleting pod pod-subpath-test-secret-4gck Dec 25 13:11:36.153: INFO: Deleting pod "pod-subpath-test-secret-4gck" in namespace "subpath-8697" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:11:36.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8697" for this suite. 
Dec 25 13:11:42.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:11:42.332: INFO: namespace subpath-8697 deletion completed in 6.164532194s • [SLOW TEST:39.488 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:11:42.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 25 13:11:42.408: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 25 13:11:42.416: INFO: Waiting for terminating namespaces to be deleted... 
Dec 25 13:11:42.419: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Dec 25 13:11:42.428: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.428: INFO: Container kube-proxy ready: true, restart count 0
Dec 25 13:11:42.428: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 25 13:11:42.428: INFO: Container weave ready: true, restart count 0
Dec 25 13:11:42.428: INFO: Container weave-npc ready: true, restart count 0
Dec 25 13:11:42.428: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 25 13:11:42.435: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container kube-apiserver ready: true, restart count 0
Dec 25 13:11:42.436: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container kube-scheduler ready: true, restart count 7
Dec 25 13:11:42.436: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container coredns ready: true, restart count 0
Dec 25 13:11:42.436: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container etcd ready: true, restart count 0
Dec 25 13:11:42.436: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container weave ready: true, restart count 0
Dec 25 13:11:42.436: INFO: Container weave-npc ready: true, restart count 0
Dec 25 13:11:42.436: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container coredns ready: true, restart count 0
Dec 25 13:11:42.436: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container kube-controller-manager ready: true, restart count 10
Dec 25 13:11:42.436: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 25 13:11:42.436: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-3ceaef73-2110-4ae7-8b1c-5fc74a49c757 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-3ceaef73-2110-4ae7-8b1c-5fc74a49c757 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-3ceaef73-2110-4ae7-8b1c-5fc74a49c757
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:12:02.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-68" for this suite.
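The relaunch above succeeds because nodeSelector scheduling only places a pod on a node whose labels contain every selector entry; once the random label is applied to iruya-node, the pod fits there. A minimal Python sketch of that matching rule (illustrative only; the real predicate lives in the scheduler, not in this log):

```python
# Sketch of the nodeSelector predicate exercised by the test above:
# a pod fits a node only if every key/value pair in its nodeSelector
# appears verbatim in the node's labels. Names here are hypothetical.
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """True if node_selector is a subset (key and value) of node_labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node_labels = {
    "kubernetes.io/hostname": "iruya-node",
    # the random e2e label from the log, whose value is "42"
    "kubernetes.io/e2e-3ceaef73-2110-4ae7-8b1c-5fc74a49c757": "42",
}
# pod relaunched with the matching label schedules onto the node
assert node_selector_matches(
    node_labels,
    {"kubernetes.io/e2e-3ceaef73-2110-4ae7-8b1c-5fc74a49c757": "42"})
# a selector the node does not carry would keep the pod Pending
assert not node_selector_matches(node_labels, {"absent-label": "42"})
```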
Dec 25 13:12:16.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:12:16.996: INFO: namespace sched-pred-68 deletion completed in 14.25542967s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:34.664 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:12:16.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 25 13:12:27.938: INFO: Successfully updated pod "labelsupdate0a328a13-c1d0-4b3e-97cb-b29c52c06ce8"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:12:32.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4684" for this suite.
Dec 25 13:12:56.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:12:56.292: INFO: namespace projected-4684 deletion completed in 24.175863188s
• [SLOW TEST:39.295 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:12:56.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2334.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2334.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2334.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2334.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2334.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2334.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 25 13:13:08.529: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9: the server could not find the requested resource (get pods dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9)
Dec 25 13:13:08.579: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9: the server could not find the requested resource (get pods dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9)
Dec 25 13:13:08.594: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2334.svc.cluster.local from pod dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9: the server could not find the requested resource (get pods dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9)
Dec 25 13:13:08.605: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9: the server could not find the requested resource (get pods dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9)
Dec 25 13:13:08.615: INFO: Unable to read jessie_udp@PodARecord from pod dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9: the server could not find the requested resource (get pods dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9)
Dec 25 13:13:08.623: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9: the server could not find the requested resource (get pods dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9)
Dec 25 13:13:08.623: INFO: Lookups using dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2334.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 25 13:13:13.702: INFO: DNS probes using dns-2334/dns-test-fc3ce782-c32d-4218-8d50-052f54dd5cb9 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:13:13.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2334" for this suite.
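The `podARec=$$(hostname -i| awk -F. ...)` fragment in the probe scripts above turns the pod's IP into its dashed-IP pod A record name (`<a>-<b>-<c>-<d>.<namespace>.pod.cluster.local`), which `dig` then resolves over UDP and TCP. A small Python equivalent of that name construction (an illustrative sketch; `pod_a_record` is a hypothetical helper, not part of the test image):

```python
def pod_a_record(pod_ip: str, namespace: str,
                 domain: str = "cluster.local") -> str:
    """Build a pod's DNS A record name by replacing dots in its IP with
    dashes, mirroring the awk one-liner in the probe script above."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"

# Pod IP 10.44.0.1 in namespace dns-2334 (values taken from the log)
assert pod_a_record("10.44.0.1", "dns-2334") == \
    "10-44-0-1.dns-2334.pod.cluster.local"
```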
Dec 25 13:13:19.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:13:20.073: INFO: namespace dns-2334 deletion completed in 6.222362163s
• [SLOW TEST:23.780 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:13:20.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 13:13:20.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-796'
Dec 25 13:13:23.069: INFO: stderr: ""
Dec 25 13:13:23.069: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 25 13:13:23.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-796'
Dec 25 13:13:23.567: INFO: stderr: ""
Dec 25 13:13:23.568: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 25 13:13:24.585: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:24.585: INFO: Found 0 / 1
Dec 25 13:13:25.580: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:25.580: INFO: Found 0 / 1
Dec 25 13:13:26.589: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:26.590: INFO: Found 0 / 1
Dec 25 13:13:27.578: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:27.578: INFO: Found 0 / 1
Dec 25 13:13:28.648: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:28.648: INFO: Found 0 / 1
Dec 25 13:13:29.579: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:29.579: INFO: Found 0 / 1
Dec 25 13:13:30.582: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:30.583: INFO: Found 0 / 1
Dec 25 13:13:31.578: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:31.578: INFO: Found 1 / 1
Dec 25 13:13:31.578: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 25 13:13:31.581: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 13:13:31.582: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
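The `Found 0 / 1` … `Found 1 / 1` sequence above is a poll-until-timeout loop: the framework re-lists pods matching `app=redis` about once a second until the expected count is running or the 5m0s budget (`WaitFor completed with timeout 5m0s`) expires. A minimal Python sketch of that pattern, using hypothetical names (`wait_for`, `pod_running`) rather than the framework's own code:

```python
import time

def wait_for(predicate, timeout_s: float = 300.0,
             interval_s: float = 1.0) -> bool:
    """Poll predicate once per interval until it returns True or the
    timeout expires; mirrors the Found 0/1 ... Found 1/1 loop above."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval_s)
    return False

# Toy predicate standing in for "the redis-master pod is Running":
# it becomes true on the third poll, like the pod turning Ready above.
calls = {"n": 0}
def pod_running() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_for(pod_running, timeout_s=5.0, interval_s=0.01)
```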
Dec 25 13:13:31.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-jhm82 --namespace=kubectl-796'
Dec 25 13:13:31.723: INFO: stderr: ""
Dec 25 13:13:31.724: INFO: stdout: "Name: redis-master-jhm82\nNamespace: kubectl-796\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Wed, 25 Dec 2019 13:13:23 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://bc9e35e742620a42dd6889f47038908f1498b946bc8b8d00e8ba2b7e8005e390\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 25 Dec 2019 13:13:31 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-6vv9g (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-6vv9g:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-6vv9g\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-796/redis-master-jhm82 to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-node Created container redis-master\n Normal Started 0s kubelet, iruya-node Started container redis-master\n"
Dec 25 13:13:31.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-796'
Dec 25 13:13:31.858: INFO: stderr: ""
Dec 25 13:13:31.859: INFO: stdout: "Name: redis-master\nNamespace: kubectl-796\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-jhm82\n"
Dec 25 13:13:31.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-796'
Dec 25 13:13:31.978: INFO: stderr: ""
Dec 25 13:13:31.978: INFO: stdout: "Name: redis-master\nNamespace: kubectl-796\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.106.129.94\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n"
Dec 25 13:13:31.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 25 13:13:32.139: INFO: stderr: ""
Dec 25 13:13:32.139: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Wed, 25 Dec 2019 13:12:39 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 25 Dec 2019 13:12:39 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 25 Dec 2019 13:12:39 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 25 Dec 2019 13:12:39 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 143d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n kubectl-796 redis-master-jhm82 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Dec 25 13:13:32.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-796'
Dec 25 13:13:32.251: INFO: stderr: ""
Dec 25 13:13:32.251: INFO: stdout: "Name: kubectl-796\nLabels: e2e-framework=kubectl\n e2e-run=4f1071a8-5753-45ec-9db3-dacd05d6ae4a\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:13:32.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-796" for this suite.
Dec 25 13:13:54.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:13:54.437: INFO: namespace kubectl-796 deletion completed in 22.181666847s
• [SLOW TEST:34.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:13:54.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c4c9c354-050e-43d1-a97c-46d55002bda0
STEP: Creating a pod to test consume secrets
Dec 25 13:13:54.664: INFO: Waiting up to 5m0s for pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00" in namespace "secrets-9588" to be "success or failure"
Dec 25 13:13:54.693: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00": Phase="Pending", Reason="", readiness=false. Elapsed: 29.257577ms
Dec 25 13:13:56.700: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036375591s
Dec 25 13:13:58.732: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067780288s
Dec 25 13:14:00.747: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083299735s
Dec 25 13:14:02.758: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094513629s
Dec 25 13:14:04.782: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117827966s
STEP: Saw pod success
Dec 25 13:14:04.782: INFO: Pod "pod-secrets-1068a991-a621-4382-a467-333f31c51b00" satisfied condition "success or failure"
Dec 25 13:14:04.792: INFO: Trying to get logs from node iruya-node pod pod-secrets-1068a991-a621-4382-a467-333f31c51b00 container secret-env-test:
STEP: delete the pod
Dec 25 13:14:04.900: INFO: Waiting for pod pod-secrets-1068a991-a621-4382-a467-333f31c51b00 to disappear
Dec 25 13:14:04.968: INFO: Pod pod-secrets-1068a991-a621-4382-a467-333f31c51b00 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:14:04.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9588" for this suite.
Dec 25 13:14:11.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:14:11.174: INFO: namespace secrets-9588 deletion completed in 6.1964814s
• [SLOW TEST:16.736 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:14:11.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:14:11.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5" in namespace "downward-api-6128" to be "success or failure"
Dec 25 13:14:11.332: INFO: Pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.501086ms
Dec 25 13:14:13.348: INFO: Pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024820506s
Dec 25 13:14:15.386: INFO: Pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062777451s
Dec 25 13:14:17.456: INFO: Pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132152618s
Dec 25 13:14:19.466: INFO: Pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142110354s
STEP: Saw pod success
Dec 25 13:14:19.466: INFO: Pod "downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5" satisfied condition "success or failure"
Dec 25 13:14:19.469: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5 container client-container:
STEP: delete the pod
Dec 25 13:14:19.617: INFO: Waiting for pod downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5 to disappear
Dec 25 13:14:19.637: INFO: Pod downwardapi-volume-0cceac79-b5e8-488d-ad3d-77a45c0662f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:14:19.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6128" for this suite.
Dec 25 13:14:25.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:14:25.870: INFO: namespace downward-api-6128 deletion completed in 6.225559064s
• [SLOW TEST:14.696 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:14:25.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 25 13:14:26.064: INFO: Waiting up to 5m0s for pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3" in namespace "emptydir-1554" to be "success or failure"
Dec 25 13:14:26.068: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.339413ms
Dec 25 13:14:28.076: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011527547s
Dec 25 13:14:30.082: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017951073s
Dec 25 13:14:32.098: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034118733s
Dec 25 13:14:34.145: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080476673s
Dec 25 13:14:36.151: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087077859s
STEP: Saw pod success
Dec 25 13:14:36.152: INFO: Pod "pod-005cf454-f75a-43cc-9903-4ddfb852f4c3" satisfied condition "success or failure"
Dec 25 13:14:36.155: INFO: Trying to get logs from node iruya-node pod pod-005cf454-f75a-43cc-9903-4ddfb852f4c3 container test-container:
STEP: delete the pod
Dec 25 13:14:36.341: INFO: Waiting for pod pod-005cf454-f75a-43cc-9903-4ddfb852f4c3 to disappear
Dec 25 13:14:36.345: INFO: Pod pod-005cf454-f75a-43cc-9903-4ddfb852f4c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:14:36.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1554" for this suite.
Dec 25 13:14:42.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:14:42.469: INFO: namespace emptydir-1554 deletion completed in 6.118498764s
• [SLOW TEST:16.597 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:14:42.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9179
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-9179
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9179
Dec 25 13:14:42.817: INFO: Found 0 stateful pods, waiting for 1
Dec 25 13:14:52.911: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 25 13:14:52.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 25 13:14:53.569: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 25 13:14:53.570: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 25 13:14:53.570: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 25 13:14:53.578: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 25 13:15:03.591: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 25 13:15:03.591: INFO: Waiting for statefulset status.replicas updated to 0
Dec 25 13:15:03.737: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997234s
Dec 25 13:15:05.224: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.889614439s
Dec 25 13:15:07.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.402937542s
Dec 25 13:15:08.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.085677422s
Dec 25 13:15:09.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.06222193s
Dec 25 13:15:11.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.052118321s
Dec 25 13:15:12.025: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.621751949s
Dec 25 13:15:13.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 601.755653ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9179
Dec 25 13:15:14.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 25 13:15:14.702: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 25 13:15:14.703: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 25 13:15:14.703: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 25 13:15:14.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 25 13:15:15.146: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 25 13:15:15.146: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 25 13:15:15.146: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 25 13:15:15.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 25 13:15:15.710: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 25 13:15:15.711: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 25 13:15:15.711: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 25 13:15:15.718: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:15:15.719: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:15:15.719: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 25 13:15:15.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 25 13:15:16.229: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 25 13:15:16.229: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 25 13:15:16.229: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 25 13:15:16.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 25 13:15:16.660: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 25 13:15:16.660: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 25 13:15:16.660: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 25 13:15:16.661: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 25 13:15:17.151: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 25 13:15:17.152: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 25 13:15:17.152: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 25 13:15:17.152: INFO: Waiting for statefulset status.replicas updated to 0 Dec 25 13:15:17.157: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 25 13:15:27.168: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:15:27.168: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:15:27.168: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:15:27.186: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:27.186: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:27.186: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:27.186: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:27.186: INFO: Dec 25 13:15:27.186: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:28.930: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:28.930: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:28.930: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:28.930: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:28.930: INFO: Dec 25 13:15:28.930: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:30.267: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:30.268: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:30.268: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:30.268: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:30.268: INFO: Dec 25 13:15:30.268: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:31.300: INFO: POD NODE PHASE 
GRACE CONDITIONS Dec 25 13:15:31.300: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:31.300: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:31.300: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:31.300: INFO: Dec 25 13:15:31.300: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:32.796: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:32.797: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:32.797: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:32.797: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:32.797: INFO: Dec 25 13:15:32.797: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:33.806: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:33.807: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:33.807: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:33.807: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:33.807: INFO: Dec 25 13:15:33.807: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:34.815: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:34.815: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:34.815: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] 
Dec 25 13:15:34.815: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:34.815: INFO: Dec 25 13:15:34.815: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:35.833: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:35.833: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:35.833: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:35.833: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 
+0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:35.833: INFO: Dec 25 13:15:35.833: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 25 13:15:36.859: INFO: POD NODE PHASE GRACE CONDITIONS Dec 25 13:15:36.859: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:14:42 +0000 UTC }] Dec 25 13:15:36.860: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:36.860: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:15:03 +0000 UTC }] Dec 25 13:15:36.860: INFO: Dec 25 13:15:36.860: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespace statefulset-9179 Dec 25 13:15:37.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:15:38.074: INFO: rc: 1 Dec 25 13:15:38.075: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002512000 exit status 1 true [0xc002c5a4b8 0xc002c5a4f8 0xc002c5a528] [0xc002c5a4b8 0xc002c5a4f8 0xc002c5a528] [0xc002c5a4f0 0xc002c5a508] [0xba6c50 0xba6c50] 0xc002cad380 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 25 13:15:48.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:15:48.244: INFO: rc: 1 Dec 25 13:15:48.245: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026b81b0 exit status 1 true [0xc001995368 0xc001995380 0xc0019953b0] [0xc001995368 0xc001995380 0xc0019953b0] [0xc001995378 0xc0019953a0] [0xba6c50 0xba6c50] 0xc002cfae40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:15:58.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:15:58.386: INFO: rc: 1 Dec 25 13:15:58.387: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002b47500 exit status 1 true [0xc002970e58 0xc002970e70 0xc002970e88] [0xc002970e58 0xc002970e70 0xc002970e88] [0xc002970e68 0xc002970e80] [0xba6c50 0xba6c50] 0xc0026774a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:16:08.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:16:08.554: INFO: rc: 1 Dec 25 13:16:08.555: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0025120f0 exit status 1 true [0xc002c5a540 0xc002c5a558 0xc002c5a598] [0xc002c5a540 0xc002c5a558 0xc002c5a598] [0xc002c5a550 0xc002c5a590] [0xba6c50 0xba6c50] 0xc002cad680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:16:18.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:16:18.690: INFO: rc: 1 Dec 25 13:16:18.691: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0026b81e0 exit status 1 true [0xc000984bb0 0xc000984ca8 0xc000984e28] [0xc000984bb0 
0xc000984ca8 0xc000984e28] [0xc000984c88 0xc000984dd8] [0xba6c50 0xba6c50] 0xc002cfaf00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:16:28.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:16:28.896: INFO: rc: 1 Dec 25 13:16:28.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001eb0090 exit status 1 true [0xc00037c108 0xc00037cac0 0xc00037cb70] [0xc00037c108 0xc00037cac0 0xc00037cb70] [0xc00037cab0 0xc00037cb28] [0xba6c50 0xba6c50] 0xc002d70540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:16:38.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:16:39.704: INFO: rc: 1 Dec 25 13:16:39.704: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c5e0c0 exit status 1 true [0xc000351b08 0xc000351ee8 0xc000010050] [0xc000351b08 0xc000351ee8 0xc000010050] [0xc000351e10 0xc000351fd8] [0xba6c50 0xba6c50] 0xc001a37620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:16:49.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:16:49.880: INFO: rc: 1 Dec 25 13:16:49.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00114e0f0 exit status 1 true [0xc002970000 0xc002970018 0xc002970030] [0xc002970000 0xc002970018 0xc002970030] [0xc002970010 0xc002970028] [0xba6c50 0xba6c50] 0xc00144e900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:16:59.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:17:00.004: INFO: rc: 1 Dec 25 13:17:00.004: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013e2090 exit status 1 true [0xc00241e000 0xc00241e018 0xc00241e030] [0xc00241e000 0xc00241e018 0xc00241e030] [0xc00241e010 0xc00241e028] [0xba6c50 0xba6c50] 0xc001a719e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:17:10.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:17:10.156: INFO: rc: 1 Dec 25 13:17:10.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from 
server (NotFound): pods "ss-2" not found [] 0xc0013e2150 exit status 1 true [0xc00241e038 0xc00241e050 0xc00241e068] [0xc00241e038 0xc00241e050 0xc00241e068] [0xc00241e048 0xc00241e060] [0xba6c50 0xba6c50] 0xc001cd5260 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:17:20.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:17:20.347: INFO: rc: 1 Dec 25 13:17:20.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c5e1e0 exit status 1 true [0xc000010058 0xc0000115c0 0xc000011630] [0xc000010058 0xc0000115c0 0xc000011630] [0xc000011588 0xc0000115f8] [0xba6c50 0xba6c50] 0xc001dd2360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:17:30.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:17:30.491: INFO: rc: 1 Dec 25 13:17:30.492: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001eb0150 exit status 1 true [0xc00037cbc8 0xc00037d358 0xc00037d5c8] [0xc00037cbc8 0xc00037d358 0xc00037d5c8] [0xc00037d338 0xc00037d5b8] [0xba6c50 0xba6c50] 0xc002d70900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:17:40.492: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:17:40.648: INFO: rc: 1 Dec 25 13:17:40.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c5e2a0 exit status 1 true [0xc000011648 0xc0000117a0 0xc000011848] [0xc000011648 0xc0000117a0 0xc000011848] [0xc000011720 0xc000011818] [0xba6c50 0xba6c50] 0xc0020bac60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:17:50.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:17:50.755: INFO: rc: 1 Dec 25 13:17:50.756: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013e2240 exit status 1 true [0xc00241e070 0xc00241e088 0xc00241e0a0] [0xc00241e070 0xc00241e088 0xc00241e0a0] [0xc00241e080 0xc00241e098] [0xba6c50 0xba6c50] 0xc001cd5b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:18:00.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:18:00.921: INFO: rc: 1 Dec 25 13:18:00.921: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00114e210 exit status 1 true [0xc002970038 0xc002970050 0xc002970068] [0xc002970038 0xc002970050 0xc002970068] [0xc002970048 0xc002970060] [0xba6c50 0xba6c50] 0xc00144f320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:18:10.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:18:11.076: INFO: rc: 1 Dec 25 13:18:11.076: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00114e300 exit status 1 true [0xc002970070 0xc002970088 0xc0029700a0] [0xc002970070 0xc002970088 0xc0029700a0] [0xc002970080 0xc002970098] [0xba6c50 0xba6c50] 0xc00202b8c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:18:21.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:18:21.226: INFO: rc: 1 Dec 25 13:18:21.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00114e090 exit status 1 true [0xc000351d20 0xc000351f20 0xc002970008] [0xc000351d20 0xc000351f20 0xc002970008] [0xc000351ee8 0xc002970000] [0xba6c50 
0xba6c50] 0xc001dd36e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:18:31.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:18:31.398: INFO: rc: 1 Dec 25 13:18:31.398: INFO: Waiting 10s to retry failed RunHostCmd: Error from server (NotFound): pods "ss-2" not found, exit status 1 [the identical RunHostCmd attempt was retried every 10s from 13:18:41 through 13:20:33, each failing with the same NotFound error; the repeated command and Go struct dumps are condensed here] Dec 25 13:20:43.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9179 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:20:43.417: INFO: rc: 1 Dec 25 13:20:43.417: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Dec 25 13:20:43.417: INFO: Scaling statefulset ss to 0 Dec 25 13:20:43.435: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 25 13:20:43.438: INFO: Deleting all statefulset in ns statefulset-9179 Dec 25 13:20:43.441: INFO: Scaling statefulset ss to 0 Dec 25 13:20:43.453: INFO: Waiting for statefulset status.replicas updated to 0 Dec 25 13:20:43.457: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:20:43.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace
"statefulset-9179" for this suite. Dec 25 13:20:49.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:20:49.801: INFO: namespace statefulset-9179 deletion completed in 6.245449389s • [SLOW TEST:367.332 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:20:49.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-a18e5794-93f2-44e0-a7c2-576cfee90ee9 STEP: Creating a pod to test consume configMaps Dec 25 13:20:49.976: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea" in namespace "projected-1474" to be "success or failure" Dec 25 13:20:50.055: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea": 
Phase="Pending", Reason="", readiness=false. Elapsed: 78.724084ms Dec 25 13:20:52.067: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090723768s Dec 25 13:20:54.078: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10228989s Dec 25 13:20:56.087: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110454089s Dec 25 13:20:58.097: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120854794s Dec 25 13:21:00.107: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13111491s STEP: Saw pod success Dec 25 13:21:00.107: INFO: Pod "pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea" satisfied condition "success or failure" Dec 25 13:21:00.111: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea container projected-configmap-volume-test: STEP: delete the pod Dec 25 13:21:00.187: INFO: Waiting for pod pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea to disappear Dec 25 13:21:00.209: INFO: Pod pod-projected-configmaps-f877da82-c212-4661-b67f-5e985322c6ea no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:21:00.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1474" for this suite. 
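The projected-ConfigMap test above creates a ConfigMap and mounts it through a `projected` volume with a key-to-path mapping, then checks that the pod can read the remapped file. A minimal manifest exercising the same mechanism (the names and the busybox image are illustrative, not the e2e fixture's generated values) might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config            # illustrative; the test generates a random name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # Reads the file at the remapped path, not the original key name.
    command: ["cat", "/etc/projected/the-mapped-path"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: the-mapped-path   # key "data-1" appears as this filename
```

The test's "success or failure" condition corresponds to such a pod reaching phase Succeeded after printing the mapped value.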
Dec 25 13:21:06.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:21:06.410: INFO: namespace projected-1474 deletion completed in 6.187008883s • [SLOW TEST:16.608 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:21:06.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-f4767aae-d989-45cb-bce3-8891fb4ddcda STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-f4767aae-d989-45cb-bce3-8891fb4ddcda STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:21:18.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9394" for this suite. 
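The "updates should be reflected in volume" test above mounts a projected ConfigMap, updates the ConfigMap object, then waits to observe the new value inside the running pod. A sketch of a pod that makes that propagation observable (the ConfigMap name and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-update-demo
spec:
  containers:
  - name: watcher
    image: busybox
    # Print the mounted value in a loop so an update shows up in the logs.
    command: ["/bin/sh", "-c", "while true; do cat /etc/config/key; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config    # assumed to exist; update it and watch the logs
```

Updates reach the container on the kubelet's sync period, so the new value can take on the order of a minute to appear (hence the test's polling); note that files mounted via `subPath` never receive such updates.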
Dec 25 13:21:38.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:21:38.899: INFO: namespace projected-9394 deletion completed in 20.169555899s • [SLOW TEST:32.489 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:21:38.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-814e9077-3a27-44cd-873b-d648a9eb8ec2 STEP: Creating a pod to test consume configMaps Dec 25 13:21:38.987: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076" in namespace "configmap-9295" to be "success or failure" Dec 25 13:21:38.994: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.675652ms Dec 25 13:21:41.002: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014888159s Dec 25 13:21:43.012: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024796189s Dec 25 13:21:45.020: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032965497s Dec 25 13:21:47.029: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076": Phase="Running", Reason="", readiness=true. Elapsed: 8.041156546s Dec 25 13:21:49.041: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053195114s STEP: Saw pod success Dec 25 13:21:49.041: INFO: Pod "pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076" satisfied condition "success or failure" Dec 25 13:21:49.045: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076 container configmap-volume-test: STEP: delete the pod Dec 25 13:21:49.134: INFO: Waiting for pod pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076 to disappear Dec 25 13:21:49.140: INFO: Pod pod-configmaps-f6b6b1fe-224b-4d78-ab25-ce0e0955d076 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:21:49.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9295" for this suite. 
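The defaultMode test above mounts a ConfigMap volume with an explicit file mode and verifies it from inside the container. A sketch under the same assumptions (the ConfigMap `demo-config` and the busybox image are illustrative, not the fixture's generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # ls -l shows the mode applied to the projected files (-r-------- for 0400)
    command: ["/bin/sh", "-c", "ls -l /etc/config"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
      defaultMode: 0400    # octal file mode; Linux-only, hence [LinuxOnly]
```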
Dec 25 13:21:55.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:21:55.313: INFO: namespace configmap-9295 deletion completed in 6.168061067s • [SLOW TEST:16.413 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:21:55.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1225 13:22:05.538090 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 25 13:22:05.538: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:22:05.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-146" for this suite. 
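The garbage-collector test above creates a ReplicationController, deletes it without orphaning, and waits for the dependent pods to be removed via their ownerReferences. A sketch of an RC for reproducing that by hand (the names are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
```

With the v1.15-era client used in this run, `kubectl delete rc gc-demo-rc --cascade=true` (the default) lets the garbage collector delete the RC's pods through their ownerReferences, as the test asserts; `--cascade=false` would instead orphan them.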
Dec 25 13:22:11.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:22:11.742: INFO: namespace gc-146 deletion completed in 6.197607447s • [SLOW TEST:16.428 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:22:11.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 25 13:22:11.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2263' Dec 25 13:22:12.007: INFO: stderr: "" Dec 25 13:22:12.007: INFO: stdout: 
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Dec 25 13:22:12.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2263' Dec 25 13:22:16.601: INFO: stderr: "" Dec 25 13:22:16.602: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:22:16.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2263" for this suite. Dec 25 13:22:22.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:22:22.824: INFO: namespace kubectl-2263 deletion completed in 6.163506099s • [SLOW TEST:11.082 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:22:22.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default 
service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 25 13:22:23.016: INFO: Waiting up to 5m0s for pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56" in namespace "downward-api-2602" to be "success or failure" Dec 25 13:22:23.027: INFO: Pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.13885ms Dec 25 13:22:25.036: INFO: Pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019835111s Dec 25 13:22:27.053: INFO: Pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036313453s Dec 25 13:22:29.077: INFO: Pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060040924s Dec 25 13:22:31.091: INFO: Pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.074292645s STEP: Saw pod success Dec 25 13:22:31.091: INFO: Pod "downward-api-37b135f8-344e-4bac-81a7-41868bff1d56" satisfied condition "success or failure" Dec 25 13:22:31.102: INFO: Trying to get logs from node iruya-node pod downward-api-37b135f8-344e-4bac-81a7-41868bff1d56 container dapi-container: STEP: delete the pod Dec 25 13:22:31.195: INFO: Waiting for pod downward-api-37b135f8-344e-4bac-81a7-41868bff1d56 to disappear Dec 25 13:22:31.205: INFO: Pod downward-api-37b135f8-344e-4bac-81a7-41868bff1d56 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:22:31.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2602" for this suite. Dec 25 13:22:37.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:22:37.452: INFO: namespace downward-api-2602 deletion completed in 6.239724013s • [SLOW TEST:14.628 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:22:37.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 25 13:22:37.538: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:22:50.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7506" for this suite.
Dec 25 13:22:56.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:22:57.031: INFO: namespace init-container-7506 deletion completed in 6.11169458s

• [SLOW TEST:19.579 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:22:57.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:22:57.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02" in namespace "downward-api-4650" to be "success or failure"
Dec 25 13:22:57.106: INFO: Pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02": Phase="Pending", Reason="", readiness=false. Elapsed: 5.30099ms
Dec 25 13:22:59.121: INFO: Pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020811409s
Dec 25 13:23:01.151: INFO: Pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050448542s
Dec 25 13:23:03.163: INFO: Pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062218149s
Dec 25 13:23:05.175: INFO: Pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074800231s
STEP: Saw pod success
Dec 25 13:23:05.176: INFO: Pod "downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02" satisfied condition "success or failure"
Dec 25 13:23:05.179: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02 container client-container:
STEP: delete the pod
Dec 25 13:23:05.260: INFO: Waiting for pod downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02 to disappear
Dec 25 13:23:05.279: INFO: Pod downwardapi-volume-7e0a05ec-04bb-474b-bf96-754b55f13f02 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:23:05.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4650" for this suite.
Dec 25 13:23:11.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:23:11.456: INFO: namespace downward-api-4650 deletion completed in 6.168967178s

• [SLOW TEST:14.424 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:23:11.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7103/configmap-test-43f536b8-8a24-49be-a68e-0da95011460b
STEP: Creating a pod to test consume configMaps
Dec 25 13:23:11.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe" in namespace "configmap-7103" to be "success or failure"
Dec 25 13:23:11.741: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381417ms
Dec 25 13:23:13.755: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021923465s
Dec 25 13:23:15.767: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034402247s
Dec 25 13:23:17.790: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057451768s
Dec 25 13:23:19.799: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066466787s
Dec 25 13:23:21.811: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078727531s
STEP: Saw pod success
Dec 25 13:23:21.812: INFO: Pod "pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe" satisfied condition "success or failure"
Dec 25 13:23:21.816: INFO: Trying to get logs from node iruya-node pod pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe container env-test:
STEP: delete the pod
Dec 25 13:23:21.893: INFO: Waiting for pod pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe to disappear
Dec 25 13:23:21.898: INFO: Pod pod-configmaps-48797886-8ebe-4fc1-9333-8098acb5efbe no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:23:21.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7103" for this suite.
Dec 25 13:23:27.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:23:28.064: INFO: namespace configmap-7103 deletion completed in 6.162193384s

• [SLOW TEST:16.608 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:23:28.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-xv9d
STEP: Creating a pod to test atomic-volume-subpath
Dec 25 13:23:28.211: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xv9d" in namespace "subpath-6060" to be "success or failure"
Dec 25 13:23:28.231: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.182405ms
Dec 25 13:23:30.247: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036269202s
Dec 25 13:23:32.258: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047354095s
Dec 25 13:23:34.271: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059951829s
Dec 25 13:23:36.280: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069583293s
Dec 25 13:23:38.288: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 10.077587915s
Dec 25 13:23:40.299: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 12.088075611s
Dec 25 13:23:42.311: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 14.10058228s
Dec 25 13:23:44.321: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 16.110087407s
Dec 25 13:23:46.332: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 18.121497399s
Dec 25 13:23:48.349: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 20.138532093s
Dec 25 13:23:50.360: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 22.149591383s
Dec 25 13:23:52.376: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 24.165412221s
Dec 25 13:23:54.385: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 26.174141811s
Dec 25 13:23:56.392: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 28.180822773s
Dec 25 13:23:58.401: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Running", Reason="", readiness=true. Elapsed: 30.190352312s
Dec 25 13:24:00.410: INFO: Pod "pod-subpath-test-configmap-xv9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.198916127s
STEP: Saw pod success
Dec 25 13:24:00.410: INFO: Pod "pod-subpath-test-configmap-xv9d" satisfied condition "success or failure"
Dec 25 13:24:00.414: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-xv9d container test-container-subpath-configmap-xv9d:
STEP: delete the pod
Dec 25 13:24:00.629: INFO: Waiting for pod pod-subpath-test-configmap-xv9d to disappear
Dec 25 13:24:00.667: INFO: Pod pod-subpath-test-configmap-xv9d no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xv9d
Dec 25 13:24:00.667: INFO: Deleting pod "pod-subpath-test-configmap-xv9d" in namespace "subpath-6060"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:24:00.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6060" for this suite.
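The atomic-writer subpath test above builds its pod programmatically through the Go e2e framework, so the actual spec never appears in the log. A hand-written manifest exercising the same mount pattern might look like the following sketch; the ConfigMap name, key, and paths are illustrative assumptions, not the framework's generated values:

```yaml
# Hypothetical stand-in for the generated pod-subpath-test-configmap-* pod.
# ConfigMap name/key and mount paths are assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  volumes:
    - name: config-volume
      configMap:
        name: subpath-configmap        # assumed ConfigMap
  containers:
    - name: test-container-subpath
      image: busybox
      command: ["sh", "-c", "cat /test-volume/mount-file"]
      volumeMounts:
        - name: config-volume
          mountPath: /test-volume/mount-file
          subPath: mount-file          # mounts a single path within the volume
```

Note that a subPath mount pins one path inside the atomically-written volume, which is the behavior this conformance test exercises.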
Dec 25 13:24:06.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:24:06.963: INFO: namespace subpath-6060 deletion completed in 6.283394021s

• [SLOW TEST:38.898 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:24:06.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 25 13:24:07.096: INFO: Waiting up to 5m0s for pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f" in namespace "emptydir-7790" to be "success or failure"
Dec 25 13:24:07.108: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.09778ms
Dec 25 13:24:09.122: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025848472s
Dec 25 13:24:11.191: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095241073s
Dec 25 13:24:13.204: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108478025s
Dec 25 13:24:15.991: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.894726215s
Dec 25 13:24:18.006: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.910093582s
STEP: Saw pod success
Dec 25 13:24:18.006: INFO: Pod "pod-d6b1df25-a609-4a59-b214-3bf4c398539f" satisfied condition "success or failure"
Dec 25 13:24:18.011: INFO: Trying to get logs from node iruya-node pod pod-d6b1df25-a609-4a59-b214-3bf4c398539f container test-container:
STEP: delete the pod
Dec 25 13:24:18.676: INFO: Waiting for pod pod-d6b1df25-a609-4a59-b214-3bf4c398539f to disappear
Dec 25 13:24:18.700: INFO: Pod pod-d6b1df25-a609-4a59-b214-3bf4c398539f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:24:18.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7790" for this suite.
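The (non-root,0777,default) case above tests an emptyDir on the default (node-disk) medium written by a non-root user. The log does not show the generated pod spec, so the following is only a sketch; the UID, command, and names are assumptions:

```yaml
# Hypothetical equivalent of pod-d6b1df25-*: an emptyDir on the default
# medium, accessed by a non-root user; names and UID are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # assumed non-root UID
  volumes:
    - name: test-volume
      emptyDir: {}                   # default medium = backing node storage
  containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
```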
Dec 25 13:24:24.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:24:24.921: INFO: namespace emptydir-7790 deletion completed in 6.16007695s

• [SLOW TEST:17.957 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:24:24.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:24:25.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa" in namespace "projected-918" to be "success or failure"
Dec 25 13:24:25.104: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa": Phase="Pending", Reason="", readiness=false. Elapsed: 18.886694ms
Dec 25 13:24:27.114: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028847166s
Dec 25 13:24:29.197: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112281614s
Dec 25 13:24:31.209: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123553115s
Dec 25 13:24:33.229: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143683687s
Dec 25 13:24:35.235: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149888748s
STEP: Saw pod success
Dec 25 13:24:35.235: INFO: Pod "downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa" satisfied condition "success or failure"
Dec 25 13:24:35.240: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa container client-container:
STEP: delete the pod
Dec 25 13:24:35.305: INFO: Waiting for pod downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa to disappear
Dec 25 13:24:35.330: INFO: Pod downwardapi-volume-d035df0c-4acb-4bbc-8c28-927b5c6811aa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:24:35.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-918" for this suite.
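The DefaultMode test above verifies the file mode applied to projected downward API files. As a rough sketch of the kind of spec involved (the mode value, item paths, and names here are assumptions, since the log omits the generated spec):

```yaml
# Hypothetical pod showing defaultMode on a projected downward API volume;
# the 0400 mode and file name are illustrative, not the test's values.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode
spec:
  restartPolicy: Never
  volumes:
    - name: podinfo
      projected:
        defaultMode: 0400            # applied to files without a per-item mode
        sources:
          - downwardAPI:
              items:
                - path: podname
                  fieldRef:
                    fieldPath: metadata.name
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
```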
Dec 25 13:24:41.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:24:41.530: INFO: namespace projected-918 deletion completed in 6.188941472s

• [SLOW TEST:16.609 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:24:41.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-1c71c5e4-d5dd-4d14-bd4a-1aa0f521f0cc
STEP: Creating configMap with name cm-test-opt-upd-0fd543e5-314e-4943-a08c-db8c556d2286
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1c71c5e4-d5dd-4d14-bd4a-1aa0f521f0cc
STEP: Updating configmap cm-test-opt-upd-0fd543e5-314e-4943-a08c-db8c556d2286
STEP: Creating configMap with name cm-test-opt-create-4c9a93ec-f75c-4d5a-9587-2403afb23ff7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:25:57.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7227" for this suite.
Dec 25 13:26:19.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:26:19.340: INFO: namespace projected-7227 deletion completed in 22.142990688s

• [SLOW TEST:97.809 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:26:19.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 25 13:26:27.562: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a4400fa0-2a87-4ac4-8ade-e4fffd06a52f,GenerateName:,Namespace:events-998,SelfLink:/api/v1/namespaces/events-998/pods/send-events-a4400fa0-2a87-4ac4-8ade-e4fffd06a52f,UID:e115f1ab-948e-4654-8999-4b51d491e36c,ResourceVersion:18016233,Generation:0,CreationTimestamp:2019-12-25 13:26:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 501305388,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9mgpq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9mgpq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9mgpq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023cc3d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023cc3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:26:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:26:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:26:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:26:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-25 13:26:19 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-25 13:26:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://8951df125daf2873ad08a0ace9a38e78bb1d76317c9ff375b37faed1dcdf26b7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Dec 25 13:26:29.574: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 25 13:26:31.588: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:26:31.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-998" for this suite.
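The `&Pod{...}` dump above is the framework's Go struct rendering of the retrieved pod. Reduced to the user-settable fields visible in that dump, the same pod corresponds roughly to this manifest (the auto-mounted default-token volume and all status fields are omitted):

```yaml
# Manifest form of the send-events-* pod, reconstructed from the dump above;
# only fields visible in the log are included.
apiVersion: v1
kind: Pod
metadata:
  name: send-events-a4400fa0-2a87-4ac4-8ade-e4fffd06a52f
  namespace: events-998
  labels:
    name: foo
    time: "501305388"
spec:
  restartPolicy: Always
  containers:
    - name: p
      image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
      ports:
        - containerPort: 80
          protocol: TCP
```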
Dec 25 13:27:09.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:27:09.841: INFO: namespace events-998 deletion completed in 38.219522367s

• [SLOW TEST:50.500 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:27:09.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5667
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5667 to expose endpoints map[]
Dec 25 13:27:10.272: INFO: Get endpoints failed (15.92471ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 25 13:27:11.281: INFO: successfully validated that service endpoint-test2 in namespace services-5667 exposes endpoints map[] (1.024797109s elapsed)
STEP: Creating pod pod1 in namespace services-5667
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5667 to expose endpoints map[pod1:[80]]
Dec 25 13:27:15.441: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.142822149s elapsed, will retry)
Dec 25 13:27:20.585: INFO: successfully validated that service endpoint-test2 in namespace services-5667 exposes endpoints map[pod1:[80]] (9.286742899s elapsed)
STEP: Creating pod pod2 in namespace services-5667
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5667 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 25 13:27:25.628: INFO: Unexpected endpoints: found map[d204b9a2-89df-4b00-b049-bbe119cc078b:[80]], expected map[pod1:[80] pod2:[80]] (5.029430653s elapsed, will retry)
Dec 25 13:27:28.707: INFO: successfully validated that service endpoint-test2 in namespace services-5667 exposes endpoints map[pod1:[80] pod2:[80]] (8.108113166s elapsed)
STEP: Deleting pod pod1 in namespace services-5667
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5667 to expose endpoints map[pod2:[80]]
Dec 25 13:27:28.799: INFO: successfully validated that service endpoint-test2 in namespace services-5667 exposes endpoints map[pod2:[80]] (61.03197ms elapsed)
STEP: Deleting pod pod2 in namespace services-5667
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5667 to expose endpoints map[]
Dec 25 13:27:29.849: INFO: successfully validated that service endpoint-test2 in namespace services-5667 exposes endpoints map[] (1.040398252s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:27:30.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5667" for this suite.
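In the endpoint test above, endpoint-test2 tracks pod1 and pod2 through its label selector, so endpoints appear and disappear as matching pods come and go. A sketch of the service plus one backing pod (the selector label is an assumption; the log shows only pod names and port 80):

```yaml
# Hypothetical equivalents of endpoint-test2 and pod1; the shared label
# is assumed, since the log does not show the selector.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-5667
spec:
  selector:
    name: endpoint-pod                # assumed label shared by pod1/pod2
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: services-5667
  labels:
    name: endpoint-pod                # must match the service selector
spec:
  containers:
    - name: serve
      image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
      ports:
        - containerPort: 80
```

Deleting pod1 removes its address from the service's Endpoints object, which is exactly the transition the log validates (map[pod1:[80] pod2:[80]] → map[pod2:[80]]).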
Dec 25 13:27:53.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:27:53.291: INFO: namespace services-5667 deletion completed in 22.379495557s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:43.448 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:27:53.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 25 13:28:09.524: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:09.534: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:11.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:11.548: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:13.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:13.548: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:15.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:15.546: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:17.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:17.546: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:19.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:19.545: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:21.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:21.546: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:23.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:23.545: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:25.535: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:25.547: INFO: Pod pod-with-prestop-http-hook still exists
Dec 25 13:28:27.536: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 25 13:28:27.554: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:28:27.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-988" for this suite.
Dec 25 13:28:51.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:28:51.841: INFO: namespace container-lifecycle-hook-988 deletion completed in 24.23284983s

• [SLOW TEST:58.550 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:28:51.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:28:51.953: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6" in namespace "projected-8276" to be "success or failure"
Dec 25 13:28:51.957: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.979262ms
Dec 25 13:28:53.972: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018699568s
Dec 25 13:28:56.070: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116834716s
Dec 25 13:28:58.077: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12436955s
Dec 25 13:29:00.086: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133298848s
Dec 25 13:29:02.100: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146890967s
STEP: Saw pod success
Dec 25 13:29:02.100: INFO: Pod "downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6" satisfied condition "success or failure"
Dec 25 13:29:02.104: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6 container client-container:
STEP: delete the pod
Dec 25 13:29:02.283: INFO: Waiting for pod downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6 to disappear
Dec 25 13:29:02.300: INFO: Pod downwardapi-volume-5bea752e-447d-479f-b237-38b65400ceb6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:29:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8276" for this suite.
Dec 25 13:29:08.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:29:08.538: INFO: namespace projected-8276 deletion completed in 6.214581852s
• [SLOW TEST:16.697 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:29:08.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:29:15.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-829" for this suite.
Dec 25 13:29:21.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:29:21.230: INFO: namespace namespaces-829 deletion completed in 6.151420854s
STEP: Destroying namespace "nsdeletetest-5245" for this suite.
Dec 25 13:29:21.237: INFO: Namespace nsdeletetest-5245 was already deleted
STEP: Destroying namespace "nsdeletetest-174" for this suite.
Dec 25 13:29:27.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:29:27.460: INFO: namespace nsdeletetest-174 deletion completed in 6.222235914s
• [SLOW TEST:18.921 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:29:27.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 25 13:29:37.287: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:29:37.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-166" for this suite.
Dec 25 13:29:43.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:29:43.517: INFO: namespace container-runtime-166 deletion completed in 6.155877232s
• [SLOW TEST:16.057 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:29:43.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 25 13:29:53.806: INFO: Pod pod-hostip-70ccb99d-2ce8-4478-a08b-1c26e97d8fc1 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:29:53.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2639" for this suite.
Dec 25 13:30:15.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:30:16.029: INFO: namespace pods-2639 deletion completed in 22.18837104s
• [SLOW TEST:32.511 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:30:16.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 25 13:30:16.109: INFO: PodSpec: initContainers in spec.initContainers
Dec 25 13:31:18.285: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0c639852-ae6f-4cb7-b5b9-f4186ec728bb", GenerateName:"", Namespace:"init-container-189", SelfLink:"/api/v1/namespaces/init-container-189/pods/pod-init-0c639852-ae6f-4cb7-b5b9-f4186ec728bb",
UID:"d4ec714c-267e-4302-8301-9c004c764806", ResourceVersion:"18016862", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712877416, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"109244303"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qxx25", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c38a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qxx25", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qxx25", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qxx25", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d5f178), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028f6360), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc002d5f200)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d5f220)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d5f228), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d5f22c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712877416, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712877416, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712877416, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712877416, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00285a4a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020bc3f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0020bc460)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4c566111240ad0729b9a59736f8f0b87aae3125a5cbbdd812dd7fd9cc858e8eb"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00285a4e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00285a4c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:31:18.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-189" for this suite.
Dec 25 13:31:38.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:31:38.486: INFO: namespace init-container-189 deletion completed in 20.175579565s
• [SLOW TEST:82.456 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:31:38.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 25 13:31:38.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4035'
Dec 25 13:31:40.892: INFO: stderr: ""
Dec 25 13:31:40.892: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 25 13:31:40.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4035'
Dec 25 13:31:41.104: INFO: stderr: ""
Dec 25 13:31:41.104: INFO: stdout: "update-demo-nautilus-6btgm update-demo-nautilus-nwmst "
Dec 25 13:31:41.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6btgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:31:41.203: INFO: stderr: ""
Dec 25 13:31:41.203: INFO: stdout: ""
Dec 25 13:31:41.203: INFO: update-demo-nautilus-6btgm is created but not running
Dec 25 13:31:46.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4035'
Dec 25 13:31:47.440: INFO: stderr: ""
Dec 25 13:31:47.440: INFO: stdout: "update-demo-nautilus-6btgm update-demo-nautilus-nwmst "
Dec 25 13:31:47.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6btgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:31:47.950: INFO: stderr: ""
Dec 25 13:31:47.950: INFO: stdout: ""
Dec 25 13:31:47.950: INFO: update-demo-nautilus-6btgm is created but not running
Dec 25 13:31:52.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4035'
Dec 25 13:31:53.109: INFO: stderr: ""
Dec 25 13:31:53.109: INFO: stdout: "update-demo-nautilus-6btgm update-demo-nautilus-nwmst "
Dec 25 13:31:53.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6btgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:31:53.208: INFO: stderr: ""
Dec 25 13:31:53.209: INFO: stdout: "true"
Dec 25 13:31:53.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6btgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:31:53.310: INFO: stderr: ""
Dec 25 13:31:53.310: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 13:31:53.310: INFO: validating pod update-demo-nautilus-6btgm
Dec 25 13:31:53.323: INFO: got data: { "image": "nautilus.jpg" }
Dec 25 13:31:53.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 13:31:53.323: INFO: update-demo-nautilus-6btgm is verified up and running
Dec 25 13:31:53.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:31:53.430: INFO: stderr: ""
Dec 25 13:31:53.431: INFO: stdout: "true"
Dec 25 13:31:53.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nwmst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:31:53.537: INFO: stderr: ""
Dec 25 13:31:53.538: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 13:31:53.538: INFO: validating pod update-demo-nautilus-nwmst
Dec 25 13:31:53.567: INFO: got data: { "image": "nautilus.jpg" }
Dec 25 13:31:53.567: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 13:31:53.567: INFO: update-demo-nautilus-nwmst is verified up and running
STEP: rolling-update to new replication controller
Dec 25 13:31:53.569: INFO: scanned /root for discovery docs:
Dec 25 13:31:53.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4035'
Dec 25 13:32:25.134: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 25 13:32:25.135: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 25 13:32:25.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4035'
Dec 25 13:32:25.262: INFO: stderr: ""
Dec 25 13:32:25.263: INFO: stdout: "update-demo-kitten-dbr94 update-demo-kitten-mtjdz update-demo-nautilus-6btgm "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 25 13:32:30.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4035'
Dec 25 13:32:30.420: INFO: stderr: ""
Dec 25 13:32:30.420: INFO: stdout: "update-demo-kitten-dbr94 update-demo-kitten-mtjdz "
Dec 25 13:32:30.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dbr94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:32:30.538: INFO: stderr: ""
Dec 25 13:32:30.538: INFO: stdout: "true"
Dec 25 13:32:30.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dbr94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:32:30.632: INFO: stderr: ""
Dec 25 13:32:30.633: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 25 13:32:30.633: INFO: validating pod update-demo-kitten-dbr94
Dec 25 13:32:30.647: INFO: got data: { "image": "kitten.jpg" }
Dec 25 13:32:30.647: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 25 13:32:30.647: INFO: update-demo-kitten-dbr94 is verified up and running
Dec 25 13:32:30.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mtjdz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:32:30.739: INFO: stderr: ""
Dec 25 13:32:30.739: INFO: stdout: "true"
Dec 25 13:32:30.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mtjdz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4035'
Dec 25 13:32:30.844: INFO: stderr: ""
Dec 25 13:32:30.844: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 25 13:32:30.844: INFO: validating pod update-demo-kitten-mtjdz
Dec 25 13:32:30.875: INFO: got data: { "image": "kitten.jpg" }
Dec 25 13:32:30.875: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 25 13:32:30.875: INFO: update-demo-kitten-mtjdz is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:32:30.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4035" for this suite.
Dec 25 13:32:54.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:32:55.050: INFO: namespace kubectl-4035 deletion completed in 24.158870689s
• [SLOW TEST:76.564 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:32:55.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 25 13:33:03.229: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 25 13:33:18.413: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:33:18.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9346" for this suite.
Dec 25 13:33:24.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:33:24.571: INFO: namespace pods-9346 deletion completed in 6.13036337s
• [SLOW TEST:29.520 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:33:24.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:33:24.699: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1" in namespace "projected-6381" to be "success or failure"
Dec 25 13:33:24.718: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.580245ms
Dec 25 13:33:26.729: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029521428s
Dec 25 13:33:28.742: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042704774s
Dec 25 13:33:30.753: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053411285s
Dec 25 13:33:32.759: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059652676s
Dec 25 13:33:34.770: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070917141s
STEP: Saw pod success
Dec 25 13:33:34.770: INFO: Pod "downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1" satisfied condition "success or failure"
Dec 25 13:33:34.778: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1 container client-container:
STEP: delete the pod
Dec 25 13:33:35.209: INFO: Waiting for pod downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1 to disappear
Dec 25 13:33:35.213: INFO: Pod downwardapi-volume-33c8cd38-07d2-4771-b38a-2e3f986f60e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:33:35.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6381" for this suite.
Dec 25 13:33:41.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:33:41.459: INFO: namespace projected-6381 deletion completed in 6.185467669s
• [SLOW TEST:16.887 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:33:41.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9312
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 25 13:33:41.624: INFO: Found 0 stateful pods, waiting for 3
Dec 25 13:33:51.633: INFO: Found 2 stateful pods, waiting for 3
Dec 25 13:34:01.682: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:34:01.682: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:34:01.682: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 25 13:34:11.635: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:34:11.635: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:34:11.635: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 25 13:34:11.684: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 25 13:34:21.784: INFO: Updating stateful set ss2
Dec 25 13:34:21.837: INFO: Waiting for Pod statefulset-9312/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 13:34:31.859: INFO: Waiting for Pod statefulset-9312/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 25 13:34:42.242: INFO: Found 2 stateful pods, waiting for 3
Dec 25 13:34:52.259: INFO: Found 2 stateful pods, waiting for 3
Dec 25 13:35:02.249: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:35:02.249: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 13:35:02.249: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 25 13:35:02.271: INFO: Updating stateful set ss2
Dec 25 13:35:02.433: INFO: Waiting for Pod statefulset-9312/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 13:35:12.451: INFO: Waiting for Pod statefulset-9312/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 13:35:22.481: INFO: Updating stateful set ss2
Dec 25 13:35:22.531: INFO: Waiting for StatefulSet statefulset-9312/ss2 to complete update
Dec 25 13:35:22.532: INFO: Waiting for Pod statefulset-9312/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 13:35:32.604: INFO: Waiting for StatefulSet statefulset-9312/ss2 to complete update
Dec 25 13:35:32.605: INFO: Waiting for Pod statefulset-9312/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 13:35:42.654: INFO: Waiting for StatefulSet statefulset-9312/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 25 13:35:52.568: INFO: Deleting all statefulset in ns statefulset-9312
Dec 25 13:35:52.580: INFO: Scaling statefulset ss2 to 0
Dec 25 13:36:22.666: INFO: Waiting for statefulset status.replicas updated to 0
Dec 25 13:36:22.670: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:36:22.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9312" for this suite.
Dec 25 13:36:30.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:36:30.938: INFO: namespace statefulset-9312 deletion completed in 8.236725713s
• [SLOW TEST:169.478 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:36:30.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 25 13:36:39.192: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:36:39.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1628" for this suite.
Dec 25 13:36:45.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:36:45.515: INFO: namespace container-runtime-1628 deletion completed in 6.157935783s
• [SLOW TEST:14.576 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:36:45.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 25 13:36:57.723: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.730: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.736: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.741: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.746: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.754: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.759: INFO: Unable to read jessie_udp@PodARecord from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.783: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a: the server could not find the requested resource (get pods dns-test-7954e611-2517-4dc7-9c94-c372be22314a)
Dec 25 13:36:57.783: INFO: Lookups using dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 25 13:37:02.856: INFO: DNS probes using dns-2045/dns-test-7954e611-2517-4dc7-9c94-c372be22314a succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:37:03.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2045" for this suite.
Dec 25 13:37:09.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:37:09.488: INFO: namespace dns-2045 deletion completed in 6.318452474s
• [SLOW TEST:23.972 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:37:09.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:37:09.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1" in namespace "projected-4126" to be "success or failure"
Dec 25 13:37:09.585: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.988262ms
Dec 25 13:37:11.596: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020253079s
Dec 25 13:37:13.617: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041223649s
Dec 25 13:37:15.636: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060008519s
Dec 25 13:37:17.645: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068564534s
Dec 25 13:37:19.656: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079973556s
STEP: Saw pod success
Dec 25 13:37:19.656: INFO: Pod "downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1" satisfied condition "success or failure"
Dec 25 13:37:19.662: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1 container client-container:
STEP: delete the pod
Dec 25 13:37:19.823: INFO: Waiting for pod downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1 to disappear
Dec 25 13:37:19.837: INFO: Pod downwardapi-volume-11c6db93-ae97-4c39-963c-727d2cc974b1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:37:19.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4126" for this suite.
Dec 25 13:37:25.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:37:26.083: INFO: namespace projected-4126 deletion completed in 6.23209671s
• [SLOW TEST:16.594 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:37:26.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 25 13:37:27.143: INFO: Pod name wrapped-volume-race-b1d8f0ab-eab2-4379-b5f8-3bcf84b836e2: Found 0 pods out of 5
Dec 25 13:37:32.169: INFO: Pod name wrapped-volume-race-b1d8f0ab-eab2-4379-b5f8-3bcf84b836e2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b1d8f0ab-eab2-4379-b5f8-3bcf84b836e2 in namespace emptydir-wrapper-8362, will wait for the garbage collector to delete the pods
Dec 25 13:38:02.312: INFO: Deleting ReplicationController wrapped-volume-race-b1d8f0ab-eab2-4379-b5f8-3bcf84b836e2 took: 11.979144ms
Dec 25 13:38:02.713: INFO: Terminating ReplicationController wrapped-volume-race-b1d8f0ab-eab2-4379-b5f8-3bcf84b836e2 pods took: 401.652189ms
STEP: Creating RC which spawns configmap-volume pods
Dec 25 13:38:47.050: INFO: Pod name wrapped-volume-race-c268db4b-ec9e-493a-81a7-4d1885e43ceb: Found 0 pods out of 5
Dec 25 13:38:52.110: INFO: Pod name wrapped-volume-race-c268db4b-ec9e-493a-81a7-4d1885e43ceb: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c268db4b-ec9e-493a-81a7-4d1885e43ceb in namespace emptydir-wrapper-8362, will wait for the garbage collector to delete the pods
Dec 25 13:39:26.212: INFO: Deleting ReplicationController wrapped-volume-race-c268db4b-ec9e-493a-81a7-4d1885e43ceb took: 11.736198ms
Dec 25 13:39:26.613: INFO: Terminating ReplicationController wrapped-volume-race-c268db4b-ec9e-493a-81a7-4d1885e43ceb pods took: 400.651608ms
STEP: Creating RC which spawns configmap-volume pods
Dec 25 13:40:16.963: INFO: Pod name wrapped-volume-race-2a9a30e6-d082-46f8-8ef9-7dfa1ceb6d53: Found 0 pods out of 5
Dec 25 13:40:21.976: INFO: Pod name wrapped-volume-race-2a9a30e6-d082-46f8-8ef9-7dfa1ceb6d53: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2a9a30e6-d082-46f8-8ef9-7dfa1ceb6d53 in namespace emptydir-wrapper-8362, will wait for the garbage collector to delete the pods
Dec 25 13:40:58.204: INFO: Deleting ReplicationController wrapped-volume-race-2a9a30e6-d082-46f8-8ef9-7dfa1ceb6d53 took: 22.912338ms
Dec 25 13:40:58.505: INFO: Terminating ReplicationController wrapped-volume-race-2a9a30e6-d082-46f8-8ef9-7dfa1ceb6d53 pods took: 301.070528ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:41:47.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8362" for this suite.
Dec 25 13:41:57.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:41:57.677: INFO: namespace emptydir-wrapper-8362 deletion completed in 10.16522578s
• [SLOW TEST:271.594 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:41:57.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 25 13:41:57.921: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:42:26.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2502" for this suite.
Dec 25 13:42:32.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:42:32.753: INFO: namespace pods-2502 deletion completed in 6.166925351s
• [SLOW TEST:35.076 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:42:32.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:43:32.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6216" for this suite.
Dec 25 13:43:54.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:43:55.051: INFO: namespace container-probe-6216 deletion completed in 22.1475294s
• [SLOW TEST:82.297 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:43:55.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0c4d2c50-1440-4061-82eb-29dc4b75d6b0
STEP: Creating a pod to test consume configMaps
Dec 25 13:43:55.140: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16" in namespace "projected-4035" to be "success or failure"
Dec 25 13:43:55.153: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 12.320201ms
Dec 25 13:43:57.164: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023407014s
Dec 25 13:43:59.178: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037301507s
Dec 25 13:44:01.186: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045713085s
Dec 25 13:44:03.195: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054629361s
Dec 25 13:44:05.204: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063362708s
STEP: Saw pod success
Dec 25 13:44:05.204: INFO: Pod "pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16" satisfied condition "success or failure"
Dec 25 13:44:05.209: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16 container projected-configmap-volume-test:
STEP: delete the pod
Dec 25 13:44:05.325: INFO: Waiting for pod pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16 to disappear
Dec 25 13:44:05.329: INFO: Pod pod-projected-configmaps-02a763dc-5a28-4bef-bf4f-197d9be0cd16 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:44:05.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4035" for this suite.
Dec 25 13:44:11.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:44:11.596: INFO: namespace projected-4035 deletion completed in 6.25969042s • [SLOW TEST:16.545 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:44:11.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 25 13:44:11.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8040' Dec 25 13:44:13.910: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will 
be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 25 13:44:13.911: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Dec 25 13:44:13.940: INFO: scanned /root for discovery docs: Dec 25 13:44:13.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8040' Dec 25 13:44:36.031: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 25 13:44:36.031: INFO: stdout: "Created e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0\nScaling up e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
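The deprecated `kubectl run --generator=run/v1` invocation logged above creates a bare ReplicationController. A sketch of the equivalent object, assuming default generator behavior (one replica, the `run=<name>` label as selector):

```yaml
# Roughly what `kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1` produces.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  namespace: kubectl-8040
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc          # kubectl's generated label, used below to list pods
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl rolling-update` then clones this controller under a hash-suffixed name, shifts replicas one pod at a time, and renames the clone back, which is exactly the scale-up/scale-down/rename sequence in the stdout above.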
Dec 25 13:44:36.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8040' Dec 25 13:44:36.197: INFO: stderr: "" Dec 25 13:44:36.197: INFO: stdout: "e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0-8qr9t e2e-test-nginx-rc-rgrzd " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Dec 25 13:44:41.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8040' Dec 25 13:44:41.321: INFO: stderr: "" Dec 25 13:44:41.321: INFO: stdout: "e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0-8qr9t " Dec 25 13:44:41.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0-8qr9t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8040' Dec 25 13:44:41.413: INFO: stderr: "" Dec 25 13:44:41.414: INFO: stdout: "true" Dec 25 13:44:41.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0-8qr9t -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8040' Dec 25 13:44:41.500: INFO: stderr: "" Dec 25 13:44:41.500: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Dec 25 13:44:41.500: INFO: e2e-test-nginx-rc-5db1c5d69bc4e665b3070e4d616170f0-8qr9t is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Dec 25 13:44:41.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8040' Dec 25 13:44:41.634: INFO: stderr: "" Dec 25 13:44:41.634: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:44:41.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8040" for this suite. 
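The next suite exercises ordered StatefulSet scaling in namespace statefulset-1125, using a service named test and a StatefulSet named ss whose readiness probe the test deliberately breaks by moving index.html aside. A minimal sketch of the objects involved; the image, probe, and label details are assumptions, not the suite's exact definitions:

```yaml
# Illustrative sketch of the test fixtures; the real suite builds these in Go.
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: statefulset-1125
spec:
  clusterIP: None              # headless, for stable per-pod DNS
  selector:
    baz: blah
    foo: bar
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-1125
spec:
  serviceName: test
  replicas: 1                  # test later scales to 3, then back to 0
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx           # assumption; the suite uses its own test image
        readinessProbe:        # fails once the test mv's index.html to /tmp
          httpGet:
            path: /index.html
            port: 80
```

Because scaling is OrderedReady by default, the controller refuses to add or remove pods while any existing pod is unready, which is the behavior the repeated "doesn't scale past N" lines below verify.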
Dec 25 13:45:03.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:45:03.787: INFO: namespace kubectl-8040 deletion completed in 22.146377492s • [SLOW TEST:52.191 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:45:03.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1125 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in 
namespace statefulset-1125 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1125 Dec 25 13:45:03.962: INFO: Found 0 stateful pods, waiting for 1 Dec 25 13:45:13.971: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 25 13:45:13.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 25 13:45:14.530: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 25 13:45:14.530: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 25 13:45:14.530: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 25 13:45:14.546: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 25 13:45:24.565: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:45:24.565: INFO: Waiting for statefulset status.replicas updated to 0 Dec 25 13:45:24.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996726s Dec 25 13:45:25.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.959138685s Dec 25 13:45:26.648: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.951586457s Dec 25 13:45:27.658: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.941138897s Dec 25 13:45:28.666: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.931544303s Dec 25 13:45:29.673: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.922989375s Dec 25 13:45:30.681: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.916251347s Dec 25 13:45:31.690: INFO: Verifying statefulset ss doesn't scale past 1 for another 
2.908684183s Dec 25 13:45:32.713: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.898830667s Dec 25 13:45:33.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 876.473225ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1125 Dec 25 13:45:34.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:45:35.338: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 25 13:45:35.338: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 25 13:45:35.338: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 25 13:45:35.426: INFO: Found 2 stateful pods, waiting for 3 Dec 25 13:45:45.437: INFO: Found 2 stateful pods, waiting for 3 Dec 25 13:45:55.453: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 25 13:45:55.453: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 25 13:45:55.453: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 25 13:45:55.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 25 13:45:56.062: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 25 13:45:56.062: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 25 13:45:56.062: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 25 13:45:56.062: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 25 13:45:56.443: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 25 13:45:56.444: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 25 13:45:56.444: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 25 13:45:56.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 25 13:45:57.105: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 25 13:45:57.105: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 25 13:45:57.105: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 25 13:45:57.105: INFO: Waiting for statefulset status.replicas updated to 0 Dec 25 13:45:57.125: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:45:57.125: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:45:57.125: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 25 13:45:57.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999775s Dec 25 13:45:58.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.955008762s Dec 25 13:45:59.222: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.929785241s Dec 25 13:46:00.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.917188231s Dec 25 13:46:01.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.902448493s Dec 25 13:46:02.263: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 4.889742385s Dec 25 13:46:03.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.875927841s Dec 25 13:46:04.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.862221389s Dec 25 13:46:05.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.854111354s Dec 25 13:46:06.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 843.025812ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1125 Dec 25 13:46:07.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:07.828: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 25 13:46:07.828: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 25 13:46:07.828: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 25 13:46:07.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:08.442: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 25 13:46:08.442: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 25 13:46:08.442: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 25 13:46:08.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:08.814: INFO: rc: 126 Dec 25 13:46:08.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown command terminated with exit code 126 [] 0xc002b47980 exit status 126 true [0xc001e16010 0xc001e160c0 0xc001e16158] [0xc001e16010 0xc001e160c0 0xc001e16158] [0xc001e16068 0xc001e16140] [0xba6c50 0xba6c50] 0xc002676d20 }: Command stdout: cannot exec in a stopped state: unknown stderr: command terminated with exit code 126 error: exit status 126 Dec 25 13:46:18.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:18.950: INFO: rc: 1 Dec 25 13:46:18.950: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002b47b30 exit status 1 true [0xc001e16160 0xc001e161d0 0xc001e161f8] [0xc001e16160 0xc001e161d0 0xc001e161f8] [0xc001e161a8 0xc001e161e8] [0xba6c50 0xba6c50] 0xc002677560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:46:28.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:29.204: INFO: rc: 1 Dec 25 13:46:29.204: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c0c0 exit status 1 true [0xc000984208 0xc000984418 0xc0009844e8] [0xc000984208 
0xc000984418 0xc0009844e8] [0xc000984398 0xc0009844a0] [0xba6c50 0xba6c50] 0xc0026aede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:46:39.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:39.354: INFO: rc: 1 Dec 25 13:46:39.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b00c0 exit status 1 true [0xc00037c108 0xc00037cac0 0xc00037cb70] [0xc00037c108 0xc00037cac0 0xc00037cb70] [0xc00037cab0 0xc00037cb28] [0xba6c50 0xba6c50] 0xc0026ceae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:46:49.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:49.643: INFO: rc: 1 Dec 25 13:46:49.644: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b0180 exit status 1 true [0xc00037cbc8 0xc00037d358 0xc00037d5c8] [0xc00037cbc8 0xc00037d358 0xc00037d5c8] [0xc00037d338 0xc00037d5b8] [0xba6c50 0xba6c50] 0xc0026cfaa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:46:59.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:46:59.840: INFO: rc: 1 Dec 25 13:46:59.840: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b0240 exit status 1 true [0xc00037d5f8 0xc00037d6a8 0xc00037d720] [0xc00037d5f8 0xc00037d6a8 0xc00037d720] [0xc00037d658 0xc00037d6e8] [0xba6c50 0xba6c50] 0xc00194f1a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:47:09.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:47:10.013: INFO: rc: 1 Dec 25 13:47:10.013: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c1b0 exit status 1 true [0xc000984540 0xc000984618 0xc000984b10] [0xc000984540 0xc000984618 0xc000984b10] [0xc0009845a8 0xc000984740] [0xba6c50 0xba6c50] 0xc002ca60c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:47:20.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:47:20.806: INFO: rc: 1 Dec 25 13:47:20.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from 
server (NotFound): pods "ss-2" not found [] 0xc0029b0330 exit status 1 true [0xc00037d790 0xc00037d828 0xc00037d870] [0xc00037d790 0xc00037d828 0xc00037d870] [0xc00037d808 0xc00037d838] [0xba6c50 0xba6c50] 0xc0028f7500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:47:30.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:47:30.974: INFO: rc: 1 Dec 25 13:47:30.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064d950 exit status 1 true [0xc000351b08 0xc000351ee8 0xc000010050] [0xc000351b08 0xc000351ee8 0xc000010050] [0xc000351e10 0xc000351fd8] [0xba6c50 0xba6c50] 0xc002cfa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:47:40.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:47:41.081: INFO: rc: 1 Dec 25 13:47:41.082: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b0420 exit status 1 true [0xc00037d920 0xc00037dbc0 0xc00037dcb0] [0xc00037d920 0xc00037dbc0 0xc00037dcb0] [0xc00037db90 0xc00037dc60] [0xba6c50 0xba6c50] 0xc0028f78c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:47:51.083: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:47:51.241: INFO: rc: 1 Dec 25 13:47:51.242: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c090 exit status 1 true [0xc001994008 0xc001994038 0xc001994098] [0xc001994008 0xc001994038 0xc001994098] [0xc001994028 0xc001994048] [0xba6c50 0xba6c50] 0xc002660240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:48:01.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:48:01.473: INFO: rc: 1 Dec 25 13:48:01.473: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c2a0 exit status 1 true [0xc000984b30 0xc000984bb0 0xc000984ca8] [0xc000984b30 0xc000984bb0 0xc000984ca8] [0xc000984b78 0xc000984c88] [0xba6c50 0xba6c50] 0xc002ca63c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:48:11.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:48:11.638: INFO: rc: 1 Dec 25 13:48:11.639: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c390 exit status 1 true [0xc000984cf8 0xc000984e60 0xc000984f20] [0xc000984cf8 0xc000984e60 0xc000984f20] [0xc000984e28 0xc000984ef0] [0xba6c50 0xba6c50] 0xc002ca66c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:48:21.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:48:21.796: INFO: rc: 1 Dec 25 13:48:21.797: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c450 exit status 1 true [0xc000984f78 0xc000984fd8 0xc000985108] [0xc000984f78 0xc000984fd8 0xc000985108] [0xc000984fc8 0xc000985088] [0xba6c50 0xba6c50] 0xc002ca69c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:48:31.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:48:31.962: INFO: rc: 1 Dec 25 13:48:31.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c0c0 exit status 1 true [0xc000351d20 0xc000351f20 0xc001994018] [0xc000351d20 0xc000351f20 0xc001994018] [0xc000351ee8 0xc001994008] [0xba6c50 
0xba6c50] 0xc0026ce5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:48:41.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:48:42.131: INFO: rc: 1 Dec 25 13:48:42.132: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c120 exit status 1 true [0xc000984000 0xc000984398 0xc0009844a0] [0xc000984000 0xc000984398 0xc0009844a0] [0xc000984320 0xc000984428] [0xba6c50 0xba6c50] 0xc0026af500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:48:52.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:48:52.279: INFO: rc: 1 Dec 25 13:48:52.279: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064d920 exit status 1 true [0xc000010050 0xc000011588 0xc0000115f8] [0xc000010050 0xc000011588 0xc0000115f8] [0xc000010290 0xc0000115d8] [0xba6c50 0xba6c50] 0xc002660180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:49:02.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:49:02.460: INFO: 
rc: 1 Dec 25 13:49:02.461: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c240 exit status 1 true [0xc0009844e8 0xc0009845a8 0xc000984740] [0xc0009844e8 0xc0009845a8 0xc000984740] [0xc000984580 0xc000984660] [0xba6c50 0xba6c50] 0xc002ca6180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:49:12.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:49:12.639: INFO: rc: 1 Dec 25 13:49:12.640: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b0090 exit status 1 true [0xc00037c108 0xc00037cac0 0xc00037cb70] [0xc00037c108 0xc00037cac0 0xc00037cb70] [0xc00037cab0 0xc00037cb28] [0xba6c50 0xba6c50] 0xc002cfa360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:49:22.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:49:22.742: INFO: rc: 1 Dec 25 13:49:22.742: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c1e0 exit status 1 
true [0xc001994028 0xc001994048 0xc0019940f8] [0xc001994028 0xc001994048 0xc0019940f8] [0xc001994040 0xc0019940d8] [0xba6c50 0xba6c50] 0xc0026cf320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:49:32.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:49:32.959: INFO: rc: 1 Dec 25 13:49:32.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c2a0 exit status 1 true [0xc001994128 0xc001994190 0xc0019941f0] [0xc001994128 0xc001994190 0xc0019941f0] [0xc001994180 0xc0019941d8] [0xba6c50 0xba6c50] 0xc0026cfec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:49:42.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:49:43.145: INFO: rc: 1 Dec 25 13:49:43.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c390 exit status 1 true [0xc001994210 0xc001994250 0xc0019942b8] [0xc001994210 0xc001994250 0xc0019942b8] [0xc001994248 0xc001994298] [0xba6c50 0xba6c50] 0xc0028f7500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:49:53.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:49:53.275: INFO: rc: 1 Dec 25 13:49:53.275: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064da70 exit status 1 true [0xc000011630 0xc000011720 0xc000011818] [0xc000011630 0xc000011720 0xc000011818] [0xc000011670 0xc0000117c0] [0xba6c50 0xba6c50] 0xc002660480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:50:03.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:50:03.420: INFO: rc: 1 Dec 25 13:50:03.420: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00064db30 exit status 1 true [0xc000011848 0xc0000118b8 0xc000011a08] [0xc000011848 0xc0000118b8 0xc000011a08] [0xc000011870 0xc000011948] [0xba6c50 0xba6c50] 0xc0026607e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:50:13.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:50:13.584: INFO: rc: 1 Dec 25 13:50:13.585: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b0210 exit status 1 true [0xc00037cbc8 0xc00037d358 0xc00037d5c8] [0xc00037cbc8 0xc00037d358 0xc00037d5c8] [0xc00037d338 0xc00037d5b8] [0xba6c50 0xba6c50] 0xc002cfa720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:50:23.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:50:23.766: INFO: rc: 1 Dec 25 13:50:23.766: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b0300 exit status 1 true [0xc00037d5f8 0xc00037d6a8 0xc00037d720] [0xc00037d5f8 0xc00037d6a8 0xc00037d720] [0xc00037d658 0xc00037d6e8] [0xba6c50 0xba6c50] 0xc002cfaa20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:50:33.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:50:34.045: INFO: rc: 1 Dec 25 13:50:34.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c090 exit status 1 true [0xc000351d20 0xc000351f20 0xc001994018] [0xc000351d20 0xc000351f20 0xc001994018] [0xc000351ee8 0xc001994008] [0xba6c50 0xba6c50] 0xc0026aede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" 
not found error: exit status 1 Dec 25 13:50:44.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:50:44.173: INFO: rc: 1 Dec 25 13:50:44.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00235c0c0 exit status 1 true [0xc000010050 0xc000011588 0xc0000115f8] [0xc000010050 0xc000011588 0xc0000115f8] [0xc000010290 0xc0000115d8] [0xba6c50 0xba6c50] 0xc0026ceae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:50:54.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:50:54.271: INFO: rc: 1 Dec 25 13:50:54.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001f1c180 exit status 1 true [0xc001994028 0xc001994048 0xc0019940f8] [0xc001994028 0xc001994048 0xc0019940f8] [0xc001994040 0xc0019940d8] [0xba6c50 0xba6c50] 0xc00194f9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:51:04.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:51:04.436: INFO: rc: 1 Dec 25 13:51:04.436: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0029b00f0 exit status 1 true [0xc000984000 0xc000984398 0xc0009844a0] [0xc000984000 0xc000984398 0xc0009844a0] [0xc000984320 0xc000984428] [0xba6c50 0xba6c50] 0xc002660240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 25 13:51:14.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1125 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 25 13:51:14.609: INFO: rc: 1 Dec 25 13:51:14.610: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Dec 25 13:51:14.610: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 25 13:51:14.659: INFO: Deleting all statefulset in ns statefulset-1125 Dec 25 13:51:14.664: INFO: Scaling statefulset ss to 0 Dec 25 13:51:14.675: INFO: Waiting for statefulset status.replicas updated to 0 Dec 25 13:51:14.677: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:51:14.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1125" for this suite. 
Dec 25 13:51:20.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:51:20.830: INFO: namespace statefulset-1125 deletion completed in 6.12490695s • [SLOW TEST:377.042 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:51:20.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-9a5e4249-9931-4a5b-ac69-b6a765cd7809 STEP: Creating a pod to test consume secrets Dec 25 13:51:21.032: INFO: Waiting up to 5m0s for pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab" in namespace "secrets-7486" to be "success or failure" Dec 25 13:51:21.041: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab": Phase="Pending", Reason="", 
readiness=false. Elapsed: 9.044175ms Dec 25 13:51:23.050: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018074288s Dec 25 13:51:25.060: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027820485s Dec 25 13:51:27.071: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038726875s Dec 25 13:51:29.081: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab": Phase="Running", Reason="", readiness=true. Elapsed: 8.0492946s Dec 25 13:51:31.102: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07064721s STEP: Saw pod success Dec 25 13:51:31.103: INFO: Pod "pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab" satisfied condition "success or failure" Dec 25 13:51:31.134: INFO: Trying to get logs from node iruya-node pod pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab container secret-volume-test: STEP: delete the pod Dec 25 13:51:31.397: INFO: Waiting for pod pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab to disappear Dec 25 13:51:31.441: INFO: Pod pod-secrets-f998c1f0-6f59-4489-9cf4-f3e7e6402aab no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:51:31.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7486" for this suite. 
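The "volume with mappings" case exercised above mounts a Secret with explicit items entries, so each key is projected into the volume under a chosen path rather than its key name. A hypothetical manifest illustrating that shape (the names here are made up for illustration; the test generates its own random names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative; not the test's generated name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1                # key in the Secret
        path: new-path-data-1      # file name inside the mount
```

Without the items list, every key in the Secret would be projected under its own key name; the mapping restricts and renames what appears in the volume.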
Dec 25 13:51:37.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:51:37.588: INFO: namespace secrets-7486 deletion completed in 6.141983567s • [SLOW TEST:16.758 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:51:37.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 25 13:51:37.750: INFO: Creating deployment "test-recreate-deployment" Dec 25 13:51:37.760: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 25 13:51:37.849: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Dec 25 13:51:39.866: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 25 13:51:39.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878698, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:51:41.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878698, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:51:43.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878698, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:51:45.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878698, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712878697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 25 13:51:47.882: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 25 13:51:47.897: INFO: Updating deployment test-recreate-deployment Dec 25 13:51:47.897: INFO: Watching deployment 
"test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 25 13:51:48.235: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9885,SelfLink:/apis/apps/v1/namespaces/deployment-9885/deployments/test-recreate-deployment,UID:8cfc040c-20d1-4e3d-b535-b3e941da1939,ResourceVersion:18020290,Generation:2,CreationTimestamp:2019-12-25 13:51:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-25 13:51:48 +0000 UTC 2019-12-25 13:51:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-25 13:51:48 +0000 UTC 2019-12-25 13:51:37 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Dec 25 13:51:48.246: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9885,SelfLink:/apis/apps/v1/namespaces/deployment-9885/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5484c14d-0989-43c7-b33b-b4291a63ca20,ResourceVersion:18020288,Generation:1,CreationTimestamp:2019-12-25 13:51:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8cfc040c-20d1-4e3d-b535-b3e941da1939 0xc0024c69a7 0xc0024c69a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 25 13:51:48.246: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 25 13:51:48.247: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9885,SelfLink:/apis/apps/v1/namespaces/deployment-9885/replicasets/test-recreate-deployment-6df85df6b9,UID:1d7cdc22-d434-45a1-a2d4-cf8f07b02634,ResourceVersion:18020278,Generation:2,CreationTimestamp:2019-12-25 13:51:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8cfc040c-20d1-4e3d-b535-b3e941da1939 0xc0024c6a77 0xc0024c6a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 25 13:51:48.255: INFO: Pod "test-recreate-deployment-5c8c9cc69d-wd5rs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-wd5rs,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9885,SelfLink:/api/v1/namespaces/deployment-9885/pods/test-recreate-deployment-5c8c9cc69d-wd5rs,UID:b06e4ea1-3ed9-492a-99cd-d91eabc8c9c3,ResourceVersion:18020291,Generation:0,CreationTimestamp:2019-12-25 13:51:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5484c14d-0989-43c7-b33b-b4291a63ca20 0xc0024c7357 0xc0024c7358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-srtwn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-srtwn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-srtwn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024c73d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024c73f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:51:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:51:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:51:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 13:51:48 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 13:51:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:51:48.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9885" for this suite. 
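The dump above shows Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,}, which is what makes the controller scale the old ReplicaSet (test-recreate-deployment-6df85df6b9, redis) to zero before the new ReplicaSet (test-recreate-deployment-5c8c9cc69d, nginx) creates any pods. The same spec, reconstructed from the dump as a YAML fragment (a sketch of the test's object, not its exact generated manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  namespace: deployment-9885
spec:
  replicas: 1
  strategy:
    type: Recreate               # all old pods are deleted before new ones start
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

With the default RollingUpdate strategy the two ReplicaSets would overlap during the rollout; Recreate guarantees they never do, which is exactly what the test verifies.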
Dec 25 13:51:54.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:51:54.389: INFO: namespace deployment-9885 deletion completed in 6.128567516s

• [SLOW TEST:16.800 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:51:54.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 25 13:51:54.646: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5984,SelfLink:/api/v1/namespaces/watch-5984/configmaps/e2e-watch-test-resource-version,UID:d11d7418-b900-4798-917b-47a7e376a0ff,ResourceVersion:18020329,Generation:0,CreationTimestamp:2019-12-25 13:51:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 25 13:51:54.647: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5984,SelfLink:/api/v1/namespaces/watch-5984/configmaps/e2e-watch-test-resource-version,UID:d11d7418-b900-4798-917b-47a7e376a0ff,ResourceVersion:18020330,Generation:0,CreationTimestamp:2019-12-25 13:51:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:51:54.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5984" for this suite.
Dec 25 13:52:00.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:52:00.910: INFO: namespace watch-5984 deletion completed in 6.255224819s

• [SLOW TEST:6.520 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:52:00.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1e1f32af-4a06-4650-9722-dcb970ca8d13
STEP: Creating a pod to test consume configMaps
Dec 25 13:52:01.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59" in namespace "configmap-9512" to be "success or failure"
Dec 25 13:52:01.045: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616301ms
Dec 25 13:52:03.053: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017066273s
Dec 25 13:52:05.065: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028785008s
Dec 25 13:52:07.077: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041384283s
Dec 25 13:52:09.084: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047850517s
Dec 25 13:52:11.119: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08287671s
STEP: Saw pod success
Dec 25 13:52:11.119: INFO: Pod "pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59" satisfied condition "success or failure"
Dec 25 13:52:11.122: INFO: Trying to get logs from node iruya-node pod pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59 container configmap-volume-test:
STEP: delete the pod
Dec 25 13:52:11.280: INFO: Waiting for pod pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59 to disappear
Dec 25 13:52:11.319: INFO: Pod pod-configmaps-798b1043-fe3e-4acf-90b3-edf9b7031e59 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:52:11.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9512" for this suite.
Dec 25 13:52:17.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:52:17.562: INFO: namespace configmap-9512 deletion completed in 6.236129366s

• [SLOW TEST:16.651 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:52:17.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 25 13:52:17.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3670'
Dec 25 13:52:17.916: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 25 13:52:17.916: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 25 13:52:17.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3670'
Dec 25 13:52:18.092: INFO: stderr: ""
Dec 25 13:52:18.092: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:52:18.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3670" for this suite.
Dec 25 13:52:24.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:52:24.363: INFO: namespace kubectl-3670 deletion completed in 6.26313246s

• [SLOW TEST:6.801 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:52:24.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 25 13:52:35.205: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9dc5ea29-db71-43d3-b727-4621e473c786"
Dec 25 13:52:35.205: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9dc5ea29-db71-43d3-b727-4621e473c786" in namespace "pods-3631" to be "terminated due to deadline exceeded"
Dec 25 13:52:35.215: INFO: Pod "pod-update-activedeadlineseconds-9dc5ea29-db71-43d3-b727-4621e473c786": Phase="Running", Reason="", readiness=true. Elapsed: 9.437318ms
Dec 25 13:52:37.229: INFO: Pod "pod-update-activedeadlineseconds-9dc5ea29-db71-43d3-b727-4621e473c786": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.023980851s
Dec 25 13:52:37.229: INFO: Pod "pod-update-activedeadlineseconds-9dc5ea29-db71-43d3-b727-4621e473c786" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:52:37.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3631" for this suite.
Dec 25 13:52:43.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:52:43.447: INFO: namespace pods-3631 deletion completed in 6.208409247s

• [SLOW TEST:19.084 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:52:43.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 13:52:43.588: INFO: Create a RollingUpdate DaemonSet
Dec 25 13:52:43.595: INFO: Check that daemon pods launch on every node of the cluster
Dec 25 13:52:43.619: INFO: Number of nodes with available pods: 0
Dec 25 13:52:43.619: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:44.632: INFO: Number of nodes with available pods: 0
Dec 25 13:52:44.632: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:46.091: INFO: Number of nodes with available pods: 0
Dec 25 13:52:46.092: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:46.847: INFO: Number of nodes with available pods: 0
Dec 25 13:52:46.847: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:47.649: INFO: Number of nodes with available pods: 0
Dec 25 13:52:47.649: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:48.653: INFO: Number of nodes with available pods: 0
Dec 25 13:52:48.654: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:50.073: INFO: Number of nodes with available pods: 0
Dec 25 13:52:50.073: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:50.684: INFO: Number of nodes with available pods: 0
Dec 25 13:52:50.685: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:52.210: INFO: Number of nodes with available pods: 0
Dec 25 13:52:52.211: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:52.630: INFO: Number of nodes with available pods: 0
Dec 25 13:52:52.630: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:53.639: INFO: Number of nodes with available pods: 0
Dec 25 13:52:53.639: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:54.634: INFO: Number of nodes with available pods: 1
Dec 25 13:52:54.634: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:52:55.635: INFO: Number of nodes with available pods: 2
Dec 25 13:52:55.635: INFO: Number of running nodes: 2, number of available pods: 2
Dec 25 13:52:55.635: INFO: Update the DaemonSet to trigger a rollout
Dec 25 13:52:55.649: INFO: Updating DaemonSet daemon-set
Dec 25 13:53:09.692: INFO: Roll back the DaemonSet before rollout is complete
Dec 25 13:53:09.708: INFO: Updating DaemonSet daemon-set
Dec 25 13:53:09.709: INFO: Make sure DaemonSet rollback is complete
Dec 25 13:53:09.718: INFO: Wrong image for pod: daemon-set-qc9bh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 25 13:53:09.718: INFO: Pod daemon-set-qc9bh is not available
Dec 25 13:53:10.766: INFO: Wrong image for pod: daemon-set-qc9bh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 25 13:53:10.766: INFO: Pod daemon-set-qc9bh is not available
Dec 25 13:53:11.763: INFO: Wrong image for pod: daemon-set-qc9bh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 25 13:53:11.763: INFO: Pod daemon-set-qc9bh is not available
Dec 25 13:53:13.064: INFO: Wrong image for pod: daemon-set-qc9bh. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 25 13:53:13.064: INFO: Pod daemon-set-qc9bh is not available
Dec 25 13:53:13.765: INFO: Pod daemon-set-pjqd4 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3805, will wait for the garbage collector to delete the pods
Dec 25 13:53:13.881: INFO: Deleting DaemonSet.extensions daemon-set took: 29.646346ms
Dec 25 13:53:14.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 901.859005ms
Dec 25 13:53:21.074: INFO: Number of nodes with available pods: 0
Dec 25 13:53:21.074: INFO: Number of running nodes: 0, number of available pods: 0
Dec 25 13:53:21.082: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3805/daemonsets","resourceVersion":"18020599"},"items":null}
Dec 25 13:53:21.089: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3805/pods","resourceVersion":"18020599"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:53:21.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3805" for this suite.
Dec 25 13:53:27.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:53:27.349: INFO: namespace daemonsets-3805 deletion completed in 6.222536893s

• [SLOW TEST:43.902 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:53:27.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 13:53:27.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5" in namespace "downward-api-3670" to be "success or failure"
Dec 25 13:53:27.533: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.169229ms
Dec 25 13:53:29.546: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027326634s
Dec 25 13:53:31.595: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076629709s
Dec 25 13:53:33.608: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089182706s
Dec 25 13:53:35.617: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098617408s
Dec 25 13:53:37.627: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10879465s
STEP: Saw pod success
Dec 25 13:53:37.627: INFO: Pod "downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5" satisfied condition "success or failure"
Dec 25 13:53:37.633: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5 container client-container:
STEP: delete the pod
Dec 25 13:53:37.780: INFO: Waiting for pod downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5 to disappear
Dec 25 13:53:37.814: INFO: Pod downwardapi-volume-3bbcb6f7-8858-47e1-b17b-62e49ab451f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:53:37.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3670" for this suite.
Dec 25 13:53:43.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:53:43.987: INFO: namespace downward-api-3670 deletion completed in 6.158906526s

• [SLOW TEST:16.636 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:53:43.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7060653f-234a-4bdb-a684-5f989ae9f204
STEP: Creating a pod to test consume configMaps
Dec 25 13:53:44.113: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a" in namespace "projected-6885" to be "success or failure"
Dec 25 13:53:44.132: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.825637ms
Dec 25 13:53:46.143: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02926101s
Dec 25 13:53:48.154: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040700656s
Dec 25 13:53:50.926: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.812725794s
Dec 25 13:53:52.934: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.821106064s
Dec 25 13:53:54.945: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.831342097s
STEP: Saw pod success
Dec 25 13:53:54.945: INFO: Pod "pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a" satisfied condition "success or failure"
Dec 25 13:53:54.949: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a container projected-configmap-volume-test:
STEP: delete the pod
Dec 25 13:53:55.076: INFO: Waiting for pod pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a to disappear
Dec 25 13:53:55.084: INFO: Pod pod-projected-configmaps-1650c7e9-7717-4d11-9198-6b50dd3aa25a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:53:55.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6885" for this suite.
Dec 25 13:54:01.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:54:01.294: INFO: namespace projected-6885 deletion completed in 6.20361349s

• [SLOW TEST:17.307 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:54:01.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-xvcq
STEP: Creating a pod to test atomic-volume-subpath
Dec 25 13:54:01.480: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xvcq" in namespace "subpath-5112" to be "success or failure"
Dec 25 13:54:01.485: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707977ms
Dec 25 13:54:03.546: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065349353s
Dec 25 13:54:05.559: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078182746s
Dec 25 13:54:07.569: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088020054s
Dec 25 13:54:09.579: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098634317s
Dec 25 13:54:11.589: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 10.108175949s
Dec 25 13:54:13.597: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 12.116813684s
Dec 25 13:54:15.606: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 14.125681481s
Dec 25 13:54:17.614: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 16.133534358s
Dec 25 13:54:19.635: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 18.154601867s
Dec 25 13:54:21.777: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 20.296315397s
Dec 25 13:54:23.814: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 22.333221075s
Dec 25 13:54:25.830: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 24.348995233s
Dec 25 13:54:27.838: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 26.357181538s
Dec 25 13:54:29.846: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Running", Reason="", readiness=true. Elapsed: 28.365585817s
Dec 25 13:54:31.859: INFO: Pod "pod-subpath-test-downwardapi-xvcq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.378655691s
STEP: Saw pod success
Dec 25 13:54:31.859: INFO: Pod "pod-subpath-test-downwardapi-xvcq" satisfied condition "success or failure"
Dec 25 13:54:31.866: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-xvcq container test-container-subpath-downwardapi-xvcq:
STEP: delete the pod
Dec 25 13:54:31.962: INFO: Waiting for pod pod-subpath-test-downwardapi-xvcq to disappear
Dec 25 13:54:31.981: INFO: Pod pod-subpath-test-downwardapi-xvcq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-xvcq
Dec 25 13:54:31.981: INFO: Deleting pod "pod-subpath-test-downwardapi-xvcq" in namespace "subpath-5112"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:54:31.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5112" for this suite.
Dec 25 13:54:38.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:54:38.209: INFO: namespace subpath-5112 deletion completed in 6.181338001s

• [SLOW TEST:36.915 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:54:38.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 25 13:54:38.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1573'
Dec 25 13:54:40.838: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 25 13:54:40.838: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 25 13:54:42.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1573'
Dec 25 13:54:43.013: INFO: stderr: ""
Dec 25 13:54:43.013: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:54:43.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1573" for this suite.
Dec 25 13:54:49.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:54:49.135: INFO: namespace kubectl-1573 deletion completed in 6.115807202s

• [SLOW TEST:10.926 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:54:49.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6929, will wait for the garbage collector to delete the pods
Dec 25 13:55:01.406: INFO: Deleting Job.batch foo took: 42.786591ms
Dec 25 13:55:01.807: INFO: Terminating Job.batch foo pods took: 400.702725ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:55:46.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6929" for this suite.
Dec 25 13:55:52.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:55:53.079: INFO: namespace job-6929 deletion completed in 6.350950691s

• [SLOW TEST:63.944 seconds]
[sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:55:53.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 25 13:55:53.265: INFO: Number of nodes with available pods: 0
Dec 25 13:55:53.265: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:55:54.284: INFO: Number of nodes with available pods: 0
Dec 25 13:55:54.284: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:55:55.280: INFO: Number of nodes with available pods: 0
Dec 25 13:55:55.280: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:55:56.285: INFO: Number of nodes with available pods: 0
Dec 25 13:55:56.285: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:55:57.279: INFO: Number of nodes with available pods: 0
Dec 25 13:55:57.280: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:55:58.291: INFO: Number of nodes with available pods: 0
Dec 25 13:55:58.291: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:55:59.331: INFO: Number of nodes with available pods: 0
Dec 25 13:55:59.331: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:56:00.654: INFO: Number of nodes with available pods: 0
Dec 25 13:56:00.654: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:56:01.283: INFO: Number of nodes with available pods: 0
Dec 25 13:56:01.283: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:56:02.289: INFO: Number of nodes with available pods: 0
Dec 25 13:56:02.289: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:56:03.282: INFO: Number of nodes with available pods: 0
Dec 25 13:56:03.283: INFO: Node iruya-node is running more than one daemon pod
Dec 25 13:56:04.280: INFO: Number of nodes with available pods: 2
Dec 25 13:56:04.280: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 25 13:56:04.382: INFO: Number of nodes with available pods: 1
Dec 25 13:56:04.383: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:05.875: INFO: Number of nodes with available pods: 1
Dec 25 13:56:05.875: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:06.970: INFO: Number of nodes with available pods: 1
Dec 25 13:56:06.971: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:07.487: INFO: Number of nodes with available pods: 1
Dec 25 13:56:07.487: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:08.583: INFO: Number of nodes with available pods: 1
Dec 25 13:56:08.583: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:09.400: INFO: Number of nodes with available pods: 1
Dec 25 13:56:09.400: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:11.174: INFO: Number of nodes with available pods: 1
Dec 25 13:56:11.174: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:11.400: INFO: Number of nodes with available pods: 1
Dec 25 13:56:11.400: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:12.445: INFO: Number of nodes with available pods: 1
Dec 25 13:56:12.445: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 25 13:56:13.399: INFO: Number of nodes with available pods: 2
Dec 25 13:56:13.400: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3260, will wait for the garbage collector to delete the pods
Dec 25 13:56:13.480: INFO: Deleting DaemonSet.extensions daemon-set took: 12.446242ms
Dec 25 13:56:13.780: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.61305ms
Dec 25 13:56:27.900: INFO: Number of nodes with available pods: 0
Dec 25 13:56:27.900: INFO: Number of running nodes: 0, number of available pods: 0
Dec 25 13:56:27.905: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3260/daemonsets","resourceVersion":"18021088"},"items":null}
Dec 25 13:56:27.908: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3260/pods","resourceVersion":"18021088"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:56:27.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3260" for this suite.
Dec 25 13:56:33.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:56:34.073: INFO: namespace daemonsets-3260 deletion completed in 6.148403909s

• [SLOW TEST:40.994 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:56:34.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 25 13:56:34.212: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7080,SelfLink:/api/v1/namespaces/watch-7080/configmaps/e2e-watch-test-watch-closed,UID:8d73d72a-0921-4286-b24b-0f18f314279a,ResourceVersion:18021125,Generation:0,CreationTimestamp:2019-12-25 13:56:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 25 13:56:34.213: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7080,SelfLink:/api/v1/namespaces/watch-7080/configmaps/e2e-watch-test-watch-closed,UID:8d73d72a-0921-4286-b24b-0f18f314279a,ResourceVersion:18021126,Generation:0,CreationTimestamp:2019-12-25 13:56:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 25 13:56:34.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7080,SelfLink:/api/v1/namespaces/watch-7080/configmaps/e2e-watch-test-watch-closed,UID:8d73d72a-0921-4286-b24b-0f18f314279a,ResourceVersion:18021127,Generation:0,CreationTimestamp:2019-12-25 13:56:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 25 13:56:34.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7080,SelfLink:/api/v1/namespaces/watch-7080/configmaps/e2e-watch-test-watch-closed,UID:8d73d72a-0921-4286-b24b-0f18f314279a,ResourceVersion:18021128,Generation:0,CreationTimestamp:2019-12-25 13:56:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:56:34.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7080" for this suite.
Dec 25 13:56:40.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:56:40.437: INFO: namespace watch-7080 deletion completed in 6.164126256s

• [SLOW TEST:6.363 seconds]
[sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:56:40.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 25 13:56:40.607: INFO: Waiting up to 5m0s for pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba" in namespace "emptydir-5075" to be "success or failure"
Dec 25 13:56:40.627: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba": Phase="Pending", Reason="", readiness=false. Elapsed: 19.381327ms
Dec 25 13:56:42.635: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02696388s
Dec 25 13:56:44.665: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057418685s
Dec 25 13:56:46.686: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078203433s
Dec 25 13:56:48.709: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100816471s
Dec 25 13:56:50.715: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106974783s
STEP: Saw pod success
Dec 25 13:56:50.715: INFO: Pod "pod-1e92f747-f26e-4351-bd66-1fb067c588ba" satisfied condition "success or failure"
Dec 25 13:56:50.719: INFO: Trying to get logs from node iruya-node pod pod-1e92f747-f26e-4351-bd66-1fb067c588ba container test-container:
STEP: delete the pod
Dec 25 13:56:50.788: INFO: Waiting for pod pod-1e92f747-f26e-4351-bd66-1fb067c588ba to disappear
Dec 25 13:56:50.795: INFO: Pod pod-1e92f747-f26e-4351-bd66-1fb067c588ba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:56:50.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5075" for this suite.
Dec 25 13:56:56.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:56:57.063: INFO: namespace emptydir-5075 deletion completed in 6.262412936s

• [SLOW TEST:16.626 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:56:57.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 25 13:56:57.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 25 13:56:57.503: INFO: stderr: ""
Dec 25 13:56:57.504: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:56:57.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8805" for this suite.
Dec 25 13:57:03.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:57:03.690: INFO: namespace kubectl-8805 deletion completed in 6.175619483s

• [SLOW TEST:6.626 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:57:03.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 25 13:57:13.028: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:57:13.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-228" for this suite.
Dec 25 13:57:19.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:57:19.322: INFO: namespace container-runtime-228 deletion completed in 6.182261438s

• [SLOW TEST:15.631 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
        /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:57:19.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 25 13:57:19.445: INFO: Waiting up to 5m0s for pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f" in namespace "emptydir-4385" to be "success or failure"
Dec 25 13:57:19.453: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.969853ms
Dec 25 13:57:21.463: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018383691s
Dec 25 13:57:23.482: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037439133s
Dec 25 13:57:25.493: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04833896s
Dec 25 13:57:27.510: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06509827s
Dec 25 13:57:29.518: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072944179s
STEP: Saw pod success
Dec 25 13:57:29.518: INFO: Pod "pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f" satisfied condition "success or failure"
Dec 25 13:57:29.522: INFO: Trying to get logs from node iruya-node pod pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f container test-container:
STEP: delete the pod
Dec 25 13:57:29.591: INFO: Waiting for pod pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f to disappear
Dec 25 13:57:29.612: INFO: Pod pod-e9402b26-2e7a-437d-a4c3-4f5a3abac72f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:57:29.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4385" for this suite.
Dec 25 13:57:35.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:57:35.976: INFO: namespace emptydir-4385 deletion completed in 6.288935337s

• [SLOW TEST:16.654 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:57:35.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e97f9d08-6123-48cc-ab02-c689abcab69d
STEP: Creating a pod to test consume configMaps
Dec 25 13:57:36.127: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e" in namespace "projected-1572" to be "success or failure"
Dec 25 13:57:36.152: INFO: Pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.469885ms
Dec 25 13:57:38.166: INFO: Pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038530147s
Dec 25 13:57:40.176: INFO: Pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047825878s
Dec 25 13:57:42.195: INFO: Pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066820778s
Dec 25 13:57:44.237: INFO: Pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109647571s
STEP: Saw pod success
Dec 25 13:57:44.238: INFO: Pod "pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e" satisfied condition "success or failure"
Dec 25 13:57:44.248: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e container projected-configmap-volume-test:
STEP: delete the pod
Dec 25 13:57:44.328: INFO: Waiting for pod pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e to disappear
Dec 25 13:57:44.333: INFO: Pod pod-projected-configmaps-9b6eb6bf-ab73-405c-bdb7-492a31d67f0e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:57:44.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1572" for this suite.
Dec 25 13:57:50.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:57:50.492: INFO: namespace projected-1572 deletion completed in 6.154071216s

• [SLOW TEST:14.516 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:57:50.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 13:57:50.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 25 13:57:50.769: INFO: stderr: ""
Dec 25 13:57:50.769: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:57:50.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6774" for this suite.
Dec 25 13:57:57.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:57:57.488: INFO: namespace kubectl-6774 deletion completed in 6.709607652s

• [SLOW TEST:6.996 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 13:57:57.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 25 13:57:57.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9191'
Dec 25 13:57:57.743: INFO: stderr: ""
Dec 25 13:57:57.744: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 25 13:58:07.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9191 -o json'
Dec 25 13:58:07.916: INFO: stderr: ""
Dec 25 13:58:07.916: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-25T13:57:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-9191\",\n \"resourceVersion\": \"18021375\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9191/pods/e2e-test-nginx-pod\",\n \"uid\": \"d1d50438-5c97-4881-b6e9-448d1eda0897\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-t4xg8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-t4xg8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-t4xg8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-25T13:57:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-25T13:58:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-25T13:58:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-25T13:57:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://6368e76683a14d6bdca08ce14b3922a903de7b93a454639a3c6f31bee5a233e6\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-25T13:58:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-25T13:57:57Z\"\n }\n}\n"
STEP: replace the image in the pod
Dec 25 13:58:07.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9191'
Dec 25 13:58:08.171: INFO: stderr: ""
Dec 25 13:58:08.171: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 25 13:58:08.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9191'
Dec 25 13:58:15.397: INFO: stderr: ""
Dec 25 13:58:15.398: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 13:58:15.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9191" for this suite.
Dec 25 13:58:21.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 13:58:21.694: INFO: namespace kubectl-9191 deletion completed in 6.280768054s

• [SLOW TEST:24.205 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a
kubernetes client Dec 25 13:58:21.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-8246 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8246 to expose endpoints map[] Dec 25 13:58:21.898: INFO: successfully validated that service multi-endpoint-test in namespace services-8246 exposes endpoints map[] (40.510004ms elapsed) STEP: Creating pod pod1 in namespace services-8246 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8246 to expose endpoints map[pod1:[100]] Dec 25 13:58:26.044: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.090129848s elapsed, will retry) Dec 25 13:58:31.117: INFO: successfully validated that service multi-endpoint-test in namespace services-8246 exposes endpoints map[pod1:[100]] (9.162581037s elapsed) STEP: Creating pod pod2 in namespace services-8246 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8246 to expose endpoints map[pod1:[100] pod2:[101]] Dec 25 13:58:36.506: INFO: Unexpected endpoints: found map[82e1312d-b403-4bb3-907d-dd81eaf9690e:[100]], expected map[pod1:[100] pod2:[101]] (5.383018209s elapsed, will retry) Dec 25 13:58:38.557: INFO: successfully validated that service multi-endpoint-test in namespace services-8246 exposes endpoints map[pod1:[100] pod2:[101]] (7.434175746s elapsed) STEP: Deleting pod pod1 in namespace services-8246 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8246 to expose endpoints 
map[pod2:[101]] Dec 25 13:58:39.613: INFO: successfully validated that service multi-endpoint-test in namespace services-8246 exposes endpoints map[pod2:[101]] (1.04971225s elapsed) STEP: Deleting pod pod2 in namespace services-8246 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8246 to expose endpoints map[] Dec 25 13:58:40.855: INFO: successfully validated that service multi-endpoint-test in namespace services-8246 exposes endpoints map[] (1.225753916s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:58:41.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8246" for this suite. Dec 25 13:58:47.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:58:47.912: INFO: namespace services-8246 deletion completed in 6.278493018s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:26.219 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:58:47.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 25 13:58:48.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d" in namespace "downward-api-8406" to be "success or failure" Dec 25 13:58:48.068: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.464144ms Dec 25 13:58:50.077: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018658334s Dec 25 13:58:52.083: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024801897s Dec 25 13:58:54.094: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03573468s Dec 25 13:58:56.099: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041024312s Dec 25 13:58:58.110: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.051859319s STEP: Saw pod success Dec 25 13:58:58.110: INFO: Pod "downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d" satisfied condition "success or failure" Dec 25 13:58:58.114: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d container client-container: STEP: delete the pod Dec 25 13:58:58.163: INFO: Waiting for pod downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d to disappear Dec 25 13:58:58.214: INFO: Pod downwardapi-volume-4dc301f6-cc49-42b6-b03f-1efc586d5e8d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:58:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8406" for this suite. Dec 25 13:59:04.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 13:59:04.544: INFO: namespace downward-api-8406 deletion completed in 6.30278817s • [SLOW TEST:16.632 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 13:59:04.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Dec 25 13:59:04.644: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Dec 25 13:59:04.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2996' Dec 25 13:59:04.991: INFO: stderr: "" Dec 25 13:59:04.991: INFO: stdout: "service/redis-slave created\n" Dec 25 13:59:04.992: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Dec 25 13:59:04.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2996' Dec 25 13:59:05.429: INFO: stderr: "" Dec 25 13:59:05.430: INFO: stdout: "service/redis-master created\n" Dec 25 13:59:05.431: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Dec 25 13:59:05.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2996' Dec 25 13:59:05.736: INFO: stderr: "" Dec 25 13:59:05.736: INFO: stdout: "service/frontend created\n" Dec 25 13:59:05.737: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Dec 25 13:59:05.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2996' Dec 25 13:59:06.065: INFO: stderr: "" Dec 25 13:59:06.066: INFO: stdout: "deployment.apps/frontend created\n" Dec 25 13:59:06.066: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 25 13:59:06.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2996' Dec 25 13:59:06.462: INFO: stderr: "" Dec 25 13:59:06.463: INFO: stdout: "deployment.apps/redis-master created\n" Dec 25 13:59:06.464: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Dec 25 13:59:06.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2996' Dec 25 13:59:07.539: INFO: stderr: "" Dec 25 13:59:07.540: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Dec 25 13:59:07.540: INFO: Waiting for all frontend pods to be Running. Dec 25 13:59:32.593: INFO: Waiting for frontend to serve content. Dec 25 13:59:32.792: INFO: Trying to add a new entry to the guestbook. Dec 25 13:59:32.861: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Dec 25 13:59:32.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2996' Dec 25 13:59:33.221: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 25 13:59:33.221: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Dec 25 13:59:33.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2996' Dec 25 13:59:33.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Dec 25 13:59:33.376: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 25 13:59:33.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2996' Dec 25 13:59:33.515: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 25 13:59:33.516: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 25 13:59:33.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2996' Dec 25 13:59:33.632: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 25 13:59:33.632: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 25 13:59:33.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2996' Dec 25 13:59:33.713: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 25 13:59:33.713: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 25 13:59:33.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2996' Dec 25 13:59:33.880: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 25 13:59:33.881: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 13:59:33.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2996" for this suite. Dec 25 14:00:21.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 14:00:22.026: INFO: namespace kubectl-2996 deletion completed in 48.135234572s • [SLOW TEST:77.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 14:00:22.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Dec 25 14:00:22.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5005 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Dec 25 14:00:31.885: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" Dec 25 14:00:31.885: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 14:00:33.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5005" for this suite. 
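The `run --rm` test above attaches stdin and pipes "abcd1234" into the container command `sh -c 'cat && echo stdin closed'`; per the log, the attached output is "abcd1234stdin closed". The container payload itself (not the kubectl plumbing) can be reproduced locally without a cluster:

```shell
# Reproduce the test's container command locally: `cat` echoes the piped
# stdin back, then `echo` prints the marker once stdin reaches EOF.
printf 'abcd1234' | sh -c "cat && echo 'stdin closed'"
# -> abcd1234stdin closed
```

This matches the concatenated stdout seen in the log, since `printf` emits no trailing newline before the marker.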
Dec 25 14:00:39.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 14:00:40.134: INFO: namespace kubectl-5005 deletion completed in 6.209813418s • [SLOW TEST:18.108 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 14:00:40.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Dec 25 14:00:40.206: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix393902351/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 14:00:40.268: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "kubectl-2003" for this suite. Dec 25 14:00:46.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 14:00:46.461: INFO: namespace kubectl-2003 deletion completed in 6.184274608s • [SLOW TEST:6.327 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 14:00:46.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-f0e5c12c-5640-4dfb-a20e-5abfd38bab9e STEP: Creating a pod to test consume secrets Dec 25 14:00:46.611: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87" in namespace "projected-7720" to be "success or failure" Dec 25 14:00:46.646: INFO: Pod 
"pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 34.725141ms Dec 25 14:00:48.658: INFO: Pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045993291s Dec 25 14:00:50.666: INFO: Pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054303695s Dec 25 14:00:52.679: INFO: Pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067241229s Dec 25 14:00:54.691: INFO: Pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079682672s Dec 25 14:00:56.701: INFO: Pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08891486s STEP: Saw pod success Dec 25 14:00:56.701: INFO: Pod "pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87" satisfied condition "success or failure" Dec 25 14:00:56.706: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87 container projected-secret-volume-test: STEP: delete the pod Dec 25 14:00:56.780: INFO: Waiting for pod pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87 to disappear Dec 25 14:00:56.791: INFO: Pod pod-projected-secrets-2d4135b8-9ac0-4222-aa51-9e4e0921ef87 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 14:00:56.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7720" for this suite. 
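The "Item Mode set" in this test refers to per-file permission bits on the projected secret volume. One detail worth noting when reading pod JSON like the dump earlier in this log: the API serializes file modes as decimal integers, so `"defaultMode": 420` is octal 0644. A one-liner to translate between the two:

```python
# Kubernetes serializes volume file modes as decimal in JSON.
# 420 decimal == 0o644 (owner rw, group/other r).
def mode_octal(decimal_mode: int) -> str:
    """Render a decimal mode from pod JSON as an octal literal."""
    return oct(decimal_mode)

print(mode_octal(420))  # -> 0o644
```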
Dec 25 14:01:02.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 14:01:02.933: INFO: namespace projected-7720 deletion completed in 6.133126176s • [SLOW TEST:16.471 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 14:01:02.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 25 14:01:03.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd" in namespace "downward-api-9700" to be "success or failure" Dec 25 14:01:03.225: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 124.740156ms Dec 25 14:01:05.234: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13305284s Dec 25 14:01:07.245: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143919525s Dec 25 14:01:09.252: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15078209s Dec 25 14:01:11.261: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160295873s Dec 25 14:01:13.270: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169486962s STEP: Saw pod success Dec 25 14:01:13.270: INFO: Pod "downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd" satisfied condition "success or failure" Dec 25 14:01:13.274: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd container client-container: STEP: delete the pod Dec 25 14:01:13.760: INFO: Waiting for pod downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd to disappear Dec 25 14:01:14.788: INFO: Pod downwardapi-volume-9559e5d3-4507-44a9-bfa5-62aef31b5bcd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 25 14:01:14.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9700" for this suite. 
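This downward API test mounts the container's memory request via a `resourceFieldRef`, whose value is the resource quantity divided by a configurable divisor, rounded up. The log doesn't show the actual request or divisor the test uses, so the values below are illustrative, and the helper is a minimal sketch (only the binary suffixes are handled, unlike the real quantity parser):

```python
# Sketch of resourceFieldRef divisor semantics: exposed value =
# ceil(resource / divisor). Hypothetical helper; handles only the
# binary suffixes (Ki/Mi/Gi) relevant to memory quantities here.
import math

_BINARY = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_quantity(q: str) -> int:
    """Parse a quantity like '64Mi' or '67108864' into bytes."""
    for suffix, mult in _BINARY.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * mult
    return int(q)

def downward_value(resource: str, divisor: str) -> int:
    """Value the downward API writes into the mounted file."""
    return math.ceil(parse_quantity(resource) / parse_quantity(divisor))

print(downward_value("64Mi", "1Mi"))  # -> 64
print(downward_value("64Mi", "1"))    # -> 67108864
```

With the default divisor of "1", memory is exposed in raw bytes, which is why such files often contain a large integer rather than the "Mi" form from the pod spec.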
Dec 25 14:01:20.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 25 14:01:20.985: INFO: namespace downward-api-9700 deletion completed in 6.185315548s • [SLOW TEST:18.051 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 25 14:01:20.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 25 14:01:21.097: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 15.368394ms)
Dec 25 14:01:21.102: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.230023ms)
Dec 25 14:01:21.108: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.656783ms)
Dec 25 14:01:21.112: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.474933ms)
Dec 25 14:01:21.117: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.113467ms)
Dec 25 14:01:21.123: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.268636ms)
Dec 25 14:01:21.131: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.583448ms)
Dec 25 14:01:21.137: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.346714ms)
Dec 25 14:01:21.141: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.995818ms)
Dec 25 14:01:21.145: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.553954ms)
Dec 25 14:01:21.149: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.610198ms)
Dec 25 14:01:21.155: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.555095ms)
Dec 25 14:01:21.165: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.671754ms)
Dec 25 14:01:21.177: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.623976ms)
Dec 25 14:01:21.216: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 39.600153ms)
Dec 25 14:01:21.225: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.146248ms)
Dec 25 14:01:21.232: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 7.034843ms)
Dec 25 14:01:21.238: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.744619ms)
Dec 25 14:01:21.242: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.747077ms)
Dec 25 14:01:21.248: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.920854ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:01:21.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-708" for this suite.
Dec 25 14:01:27.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:01:27.418: INFO: namespace proxy-708 deletion completed in 6.164469419s

• [SLOW TEST:6.433 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:01:27.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 25 14:01:45.662: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:45.672: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:47.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:47.680: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:49.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:49.686: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:51.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:51.683: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:53.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:53.712: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:55.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:55.681: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:57.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:57.682: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:01:59.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:01:59.685: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:02:01.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:02:01.681: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:02:03.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:02:03.701: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:02:05.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:02:05.699: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 25 14:02:07.673: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 25 14:02:07.717: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:02:07.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4614" for this suite.
Dec 25 14:02:29.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:02:30.018: INFO: namespace container-lifecycle-hook-4614 deletion completed in 22.229949866s

• [SLOW TEST:62.600 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
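The prestop exec hook exercised in the test above corresponds to a pod whose container declares a `lifecycle.preStop.exec` handler. A minimal sketch of such a manifest — the image, command, and hook payload are illustrative assumptions, not the exact spec the e2e framework generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox                # assumed image; the e2e suite uses its own test images
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container before SIGTERM is delivered on pod deletion
          command: ["sh", "-c", "echo prestop > /tmp/prestop"]
```

Deleting such a pod triggers the hook before termination begins, which is why the log above polls repeatedly for the pod to disappear and only then checks the prestop hook.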
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:02:30.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:02:30.130: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 25 14:02:33.193: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:02:33.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2887" for this suite.
Dec 25 14:02:43.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:02:43.511: INFO: namespace replication-controller-2887 deletion completed in 10.243206499s

• [SLOW TEST:13.492 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
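The quota/RC pair this test creates can be sketched as follows; the object names match the log, while the image and label values are assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                 # allow only two pods in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # asks for more pods than the quota permits
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: main
        image: nginx          # assumed image
```

With the quota exceeded, the controller surfaces a ReplicaFailure condition in `status.conditions`; scaling `replicas` down to 2 clears it, matching the STEP sequence above.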
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:02:43.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 25 14:02:44.322: INFO: created pod pod-service-account-defaultsa
Dec 25 14:02:44.322: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 25 14:02:44.338: INFO: created pod pod-service-account-mountsa
Dec 25 14:02:44.339: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 25 14:02:44.363: INFO: created pod pod-service-account-nomountsa
Dec 25 14:02:44.363: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 25 14:02:44.601: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 25 14:02:44.602: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 25 14:02:44.679: INFO: created pod pod-service-account-mountsa-mountspec
Dec 25 14:02:44.679: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 25 14:02:44.764: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 25 14:02:44.764: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 25 14:02:45.685: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 25 14:02:45.686: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 25 14:02:45.712: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 25 14:02:45.713: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 25 14:02:46.109: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 25 14:02:46.109: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:02:46.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1899" for this suite.
Dec 25 14:03:28.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:03:28.601: INFO: namespace svcaccounts-1899 deletion completed in 42.476419514s

• [SLOW TEST:45.090 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:03:28.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:04:20.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7607" for this suite.
Dec 25 14:04:26.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:04:27.061: INFO: namespace container-runtime-7607 deletion completed in 6.404006335s

• [SLOW TEST:58.460 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:04:27.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 25 14:04:27.227: INFO: Waiting up to 5m0s for pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323" in namespace "containers-5420" to be "success or failure"
Dec 25 14:04:27.238: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323": Phase="Pending", Reason="", readiness=false. Elapsed: 10.492075ms
Dec 25 14:04:29.254: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026148462s
Dec 25 14:04:31.261: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033982642s
Dec 25 14:04:33.285: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057594414s
Dec 25 14:04:35.296: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068436374s
Dec 25 14:04:37.304: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076719768s
STEP: Saw pod success
Dec 25 14:04:37.304: INFO: Pod "client-containers-8593aba4-0e52-41a9-b894-daa2c047e323" satisfied condition "success or failure"
Dec 25 14:04:37.310: INFO: Trying to get logs from node iruya-node pod client-containers-8593aba4-0e52-41a9-b894-daa2c047e323 container test-container: 
STEP: delete the pod
Dec 25 14:04:37.497: INFO: Waiting for pod client-containers-8593aba4-0e52-41a9-b894-daa2c047e323 to disappear
Dec 25 14:04:37.505: INFO: Pod client-containers-8593aba4-0e52-41a9-b894-daa2c047e323 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:04:37.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5420" for this suite.
Dec 25 14:04:43.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:04:43.670: INFO: namespace containers-5420 deletion completed in 6.159247082s

• [SLOW TEST:16.608 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
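Overriding an image's default command and arguments, as tested above, maps to the pod `command` and `args` fields, which replace the image's ENTRYPOINT and CMD respectively. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # assumed image
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image CMD
```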
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:04:43.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 25 14:04:53.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-d3bad7b2-bd1d-41c1-997d-e2ae8a51887b -c busybox-main-container --namespace=emptydir-1120 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 25 14:04:56.068: INFO: stderr: ""
Dec 25 14:04:56.068: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:04:56.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1120" for this suite.
Dec 25 14:05:02.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:05:02.276: INFO: namespace emptydir-1120 deletion completed in 6.19886878s

• [SLOW TEST:18.605 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
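The shared-volume setup this test exercises — two containers in one pod mounting the same `emptyDir` — roughly looks like the following; container names are taken from the log, the mount path matches the `cat` command above, and the rest is assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                      # node-local scratch volume visible to both containers
  containers:
  - name: busybox-main-container
    image: busybox                    # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox                    # assumed image; writes the shared file
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

The `kubectl exec ... cat /usr/share/volumeshare/shareddata.txt` call in the log reads, from one container, a file written by its sibling through this shared mount.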
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:05:02.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1225 14:05:02.988206       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 25 14:05:02.988: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:05:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8342" for this suite.
Dec 25 14:05:11.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:05:11.280: INFO: namespace gc-8342 deletion completed in 8.284822916s

• [SLOW TEST:9.004 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:05:11.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-0a479682-64a9-4c4c-b711-4f5e52bdd035
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:05:11.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4596" for this suite.
Dec 25 14:05:17.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:05:17.665: INFO: namespace configmap-4596 deletion completed in 6.23511807s

• [SLOW TEST:6.385 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
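The rejected object is simply a ConfigMap whose `data` map contains an empty key; the apiserver refuses it at validation time. A sketch, invalid by design:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"   # empty key: the apiserver rejects this with a validation error
```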
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:05:17.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2511
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 25 14:05:17.804: INFO: Found 0 stateful pods, waiting for 3
Dec 25 14:05:27.817: INFO: Found 2 stateful pods, waiting for 3
Dec 25 14:05:37.819: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 14:05:37.819: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 14:05:37.819: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 25 14:05:47.914: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 14:05:47.914: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 14:05:47.914: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 25 14:05:47.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2511 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 25 14:05:48.274: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 25 14:05:48.274: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 25 14:05:48.274: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 25 14:05:48.416: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 25 14:05:58.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2511 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 25 14:05:59.009: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 25 14:05:59.009: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 25 14:05:59.009: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 25 14:06:09.052: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:06:09.052: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 14:06:09.052: INFO: Waiting for Pod statefulset-2511/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 14:06:19.140: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:06:19.140: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 14:06:29.068: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:06:29.069: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 25 14:06:39.068: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 25 14:06:49.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2511 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 25 14:06:49.572: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 25 14:06:49.572: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 25 14:06:49.572: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 25 14:06:59.631: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 25 14:07:09.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2511 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 25 14:07:10.251: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 25 14:07:10.251: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 25 14:07:10.251: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 25 14:07:20.293: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:07:20.293: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:07:20.293: INFO: Waiting for Pod statefulset-2511/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:07:20.293: INFO: Waiting for Pod statefulset-2511/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:07:30.336: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:07:30.337: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:07:30.337: INFO: Waiting for Pod statefulset-2511/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:07:40.308: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:07:40.308: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:07:50.315: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
Dec 25 14:07:50.315: INFO: Waiting for Pod statefulset-2511/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 25 14:08:00.328: INFO: Waiting for StatefulSet statefulset-2511/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 25 14:08:10.308: INFO: Deleting all statefulset in ns statefulset-2511
Dec 25 14:08:10.313: INFO: Scaling statefulset ss2 to 0
Dec 25 14:08:50.368: INFO: Waiting for statefulset status.replicas updated to 0
Dec 25 14:08:50.372: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:08:50.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2511" for this suite.
Dec 25 14:08:58.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:08:58.554: INFO: namespace statefulset-2511 deletion completed in 8.154433485s

• [SLOW TEST:220.889 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
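The rolling-update/rollback spec above drives a StatefulSet named `ss2` through a template change and back. A minimal sketch of the kind of object it exercises (names, image, and replica count here are illustrative, not taken from the run):

```yaml
# Sketch of a StatefulSet like "ss2" above; values are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # headless service name (assumed)
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate      # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

A "roll back" in this test is simply a second template edit restoring the previous spec; the controller records each template as a ControllerRevision (the `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` names in the log) and again replaces pods in reverse ordinal order.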
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:08:58.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 25 14:08:58.703: INFO: Waiting up to 5m0s for pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0" in namespace "containers-5697" to be "success or failure"
Dec 25 14:08:58.747: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0": Phase="Pending", Reason="", readiness=false. Elapsed: 43.409836ms
Dec 25 14:09:01.384: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680535043s
Dec 25 14:09:03.393: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689842763s
Dec 25 14:09:05.403: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.699725931s
Dec 25 14:09:07.411: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.707501548s
Dec 25 14:09:09.419: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.715611879s
STEP: Saw pod success
Dec 25 14:09:09.419: INFO: Pod "client-containers-4887780f-ff2c-44ca-baab-57ea795145e0" satisfied condition "success or failure"
Dec 25 14:09:09.425: INFO: Trying to get logs from node iruya-node pod client-containers-4887780f-ff2c-44ca-baab-57ea795145e0 container test-container: 
STEP: delete the pod
Dec 25 14:09:09.487: INFO: Waiting for pod client-containers-4887780f-ff2c-44ca-baab-57ea795145e0 to disappear
Dec 25 14:09:09.501: INFO: Pod client-containers-4887780f-ff2c-44ca-baab-57ea795145e0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:09:09.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5697" for this suite.
Dec 25 14:09:15.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:09:15.752: INFO: namespace containers-5697 deletion completed in 6.238440084s

• [SLOW TEST:17.196 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
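The "image defaults" spec above creates a pod whose container sets neither `command` nor `args`, so the image's own ENTRYPOINT and CMD run unmodified. A minimal sketch (pod name and image are illustrative):

```yaml
# Pod with no command/args: the container runtime falls back to the
# image's ENTRYPOINT and CMD. Names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative image
    # no command:, no args: -> image defaults apply
```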
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:09:15.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2291
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2291
STEP: Deleting pre-stop pod
Dec 25 14:09:41.114: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:09:41.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2291" for this suite.
Dec 25 14:10:19.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:10:19.334: INFO: namespace prestop-2291 deletion completed in 38.168982549s

• [SLOW TEST:63.581 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
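The PreStop spec above runs a server pod that counts hook deliveries, then deletes a tester pod whose `preStop` hook calls back to it; the `"prestop": 1` in the JSON above confirms the hook fired before termination. A hedged sketch of such a tester pod (image, path, port, and server IP are illustrative assumptions):

```yaml
# Tester pod whose preStop hook POSTs back to a server pod before the
# container receives SIGTERM. All concrete values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: docker.io/library/nginx:1.14-alpine   # illustrative
    lifecycle:
      preStop:
        httpGet:            # kubelet invokes this before stopping the container
          host: 10.32.0.4   # illustrative server-pod IP
          path: /prestop
          port: 8080
```

The kubelet runs the hook synchronously during deletion, within the pod's termination grace period, which is why the server observes the request before the tester disappears.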
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:10:19.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 25 14:10:19.442: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 25 14:10:24.463: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:10:25.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7198" for this suite.
Dec 25 14:10:31.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:10:31.929: INFO: namespace replication-controller-7198 deletion completed in 6.252769626s

• [SLOW TEST:12.595 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:10:31.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 25 14:10:32.140: INFO: Waiting up to 5m0s for pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f" in namespace "containers-3978" to be "success or failure"
Dec 25 14:10:32.205: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 64.524023ms
Dec 25 14:10:34.219: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078330754s
Dec 25 14:10:36.232: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091325464s
Dec 25 14:10:38.240: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099485482s
Dec 25 14:10:40.252: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111320535s
Dec 25 14:10:42.258: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117834936s
Dec 25 14:10:44.280: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.139806547s
Dec 25 14:10:46.291: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.15010867s
STEP: Saw pod success
Dec 25 14:10:46.291: INFO: Pod "client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f" satisfied condition "success or failure"
Dec 25 14:10:46.295: INFO: Trying to get logs from node iruya-node pod client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f container test-container: 
STEP: delete the pod
Dec 25 14:10:46.354: INFO: Waiting for pod client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f to disappear
Dec 25 14:10:46.363: INFO: Pod client-containers-f11e552f-261d-47d1-aadc-70f23c472b0f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:10:46.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3978" for this suite.
Dec 25 14:10:52.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:10:52.546: INFO: namespace containers-3978 deletion completed in 6.157977412s

• [SLOW TEST:20.616 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
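The "override the image's default command" spec above is the complement of the image-defaults test: setting `command` on the container replaces the image ENTRYPOINT, and `args` replaces the image CMD. A minimal sketch (names and image are illustrative):

```yaml
# command: overrides the image ENTRYPOINT; args: overrides the image CMD.
# Values are illustrative, not taken from the run.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["/bin/echo"]                  # replaces ENTRYPOINT
    args: ["override", "entrypoint"]        # replaces CMD
```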
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:10:52.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1890
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 25 14:10:52.670: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 25 14:11:35.050: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1890 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 14:11:35.050: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 14:11:35.499: INFO: Found all expected endpoints: [netserver-0]
Dec 25 14:11:35.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1890 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 14:11:35.508: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 14:11:35.823: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:11:35.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1890" for this suite.
Dec 25 14:11:59.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:12:00.013: INFO: namespace pod-network-test-1890 deletion completed in 24.178213388s

• [SLOW TEST:67.467 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
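The node-pod networking spec above starts one "netserver" pod per node plus a `hostNetwork` exec pod, then curls each netserver's pod IP from the host network namespace (the `curl … http://10.44.0.1:8080/hostName` commands in the log). A hedged sketch of one such netserver pod; the image, args, and label key are assumptions, not taken from the run:

```yaml
# One netserver per node, serving its hostname over HTTP on 8080.
# Image and args are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
  labels:
    selector-key: netserver        # illustrative selector label
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6   # assumed test image
    args: ["netexec", "--http-port=8080"]
    ports:
    - containerPort: 8080
```

The check itself is the curl shown in the log, issued from a pod with `hostNetwork: true`, which proves node-to-pod reachability across the CNI (weave-net here).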
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:12:00.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-cc08aca3-f3a2-4961-95d3-2bcb2e65e781
STEP: Creating a pod to test consume secrets
Dec 25 14:12:00.242: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce" in namespace "projected-4239" to be "success or failure"
Dec 25 14:12:00.249: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078494ms
Dec 25 14:12:02.255: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012536418s
Dec 25 14:12:04.268: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02478128s
Dec 25 14:12:06.286: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042925045s
Dec 25 14:12:08.294: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050949858s
Dec 25 14:12:10.303: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060704537s
STEP: Saw pod success
Dec 25 14:12:10.304: INFO: Pod "pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce" satisfied condition "success or failure"
Dec 25 14:12:10.309: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce container projected-secret-volume-test: 
STEP: delete the pod
Dec 25 14:12:10.387: INFO: Waiting for pod pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce to disappear
Dec 25 14:12:10.397: INFO: Pod pod-projected-secrets-dd763dfa-2a95-4dbd-be3b-746a5aa36bce no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:12:10.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4239" for this suite.
Dec 25 14:12:16.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:12:16.636: INFO: namespace projected-4239 deletion completed in 6.230185237s

• [SLOW TEST:16.623 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
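The projected-secret spec above mounts a secret through a `projected` volume into a non-root pod, checking that `defaultMode` sets the file permissions and `fsGroup` sets group ownership. A hedged sketch of the shape of that pod (uid, gid, mode, image, and names are illustrative):

```yaml
# Projected secret volume consumed by a non-root pod. defaultMode sets
# the projected file mode; fsGroup sets group ownership of the volume.
# All concrete values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  securityContext:
    runAsUser: 1000              # non-root uid (illustrative)
    fsGroup: 2000                # applied to volume files (illustrative)
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && sleep 5"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440          # file mode for projected keys
      sources:
      - secret:
          name: projected-secret-test   # illustrative secret name
```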
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:12:16.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:12:17.735: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 25 14:12:17.758: INFO: Number of nodes with available pods: 0
Dec 25 14:12:17.758: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:19.206: INFO: Number of nodes with available pods: 0
Dec 25 14:12:19.206: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:20.043: INFO: Number of nodes with available pods: 0
Dec 25 14:12:20.043: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:20.777: INFO: Number of nodes with available pods: 0
Dec 25 14:12:20.777: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:21.857: INFO: Number of nodes with available pods: 0
Dec 25 14:12:21.858: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:23.631: INFO: Number of nodes with available pods: 0
Dec 25 14:12:23.631: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:23.981: INFO: Number of nodes with available pods: 0
Dec 25 14:12:23.981: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:24.777: INFO: Number of nodes with available pods: 0
Dec 25 14:12:24.777: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:25.812: INFO: Number of nodes with available pods: 0
Dec 25 14:12:25.812: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:26.781: INFO: Number of nodes with available pods: 1
Dec 25 14:12:26.781: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:27.810: INFO: Number of nodes with available pods: 1
Dec 25 14:12:27.811: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:28.779: INFO: Number of nodes with available pods: 2
Dec 25 14:12:28.779: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 25 14:12:28.828: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:28.828: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:29.901: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:29.901: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:30.840: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:30.841: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:31.843: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:31.843: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:32.840: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:32.841: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:32.841: INFO: Pod daemon-set-gxwzk is not available
Dec 25 14:12:33.844: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:33.844: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:33.844: INFO: Pod daemon-set-gxwzk is not available
Dec 25 14:12:34.839: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:34.839: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:34.839: INFO: Pod daemon-set-gxwzk is not available
Dec 25 14:12:35.843: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:35.843: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:35.843: INFO: Pod daemon-set-gxwzk is not available
Dec 25 14:12:36.843: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:36.843: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:36.843: INFO: Pod daemon-set-gxwzk is not available
Dec 25 14:12:37.887: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:37.887: INFO: Wrong image for pod: daemon-set-gxwzk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:37.887: INFO: Pod daemon-set-gxwzk is not available
Dec 25 14:12:38.850: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:38.851: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:39.876: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:39.876: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:40.850: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:40.850: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:42.277: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:42.278: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:42.899: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:42.899: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:43.842: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:43.842: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:44.839: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:44.839: INFO: Pod daemon-set-vlcpd is not available
Dec 25 14:12:45.861: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:46.846: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:47.847: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:48.843: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:49.844: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:50.844: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:50.844: INFO: Pod daemon-set-4ftlq is not available
Dec 25 14:12:51.851: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:51.851: INFO: Pod daemon-set-4ftlq is not available
Dec 25 14:12:52.847: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:52.847: INFO: Pod daemon-set-4ftlq is not available
Dec 25 14:12:53.844: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:53.845: INFO: Pod daemon-set-4ftlq is not available
Dec 25 14:12:54.843: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:54.843: INFO: Pod daemon-set-4ftlq is not available
Dec 25 14:12:55.849: INFO: Wrong image for pod: daemon-set-4ftlq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 25 14:12:55.849: INFO: Pod daemon-set-4ftlq is not available
Dec 25 14:12:56.849: INFO: Pod daemon-set-jwbks is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 25 14:12:56.916: INFO: Number of nodes with available pods: 1
Dec 25 14:12:56.916: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:57.940: INFO: Number of nodes with available pods: 1
Dec 25 14:12:57.940: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:58.940: INFO: Number of nodes with available pods: 1
Dec 25 14:12:58.940: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:12:59.936: INFO: Number of nodes with available pods: 1
Dec 25 14:12:59.936: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:13:00.926: INFO: Number of nodes with available pods: 1
Dec 25 14:13:00.926: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:13:01.949: INFO: Number of nodes with available pods: 1
Dec 25 14:13:01.949: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:13:02.935: INFO: Number of nodes with available pods: 1
Dec 25 14:13:02.935: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:13:03.941: INFO: Number of nodes with available pods: 2
Dec 25 14:13:03.942: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8239, will wait for the garbage collector to delete the pods
Dec 25 14:13:04.036: INFO: Deleting DaemonSet.extensions daemon-set took: 22.821688ms
Dec 25 14:13:04.337: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.75087ms
Dec 25 14:13:16.653: INFO: Number of nodes with available pods: 0
Dec 25 14:13:16.653: INFO: Number of running nodes: 0, number of available pods: 0
Dec 25 14:13:16.659: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8239/daemonsets","resourceVersion":"18023953"},"items":null}

Dec 25 14:13:16.662: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8239/pods","resourceVersion":"18023953"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:13:16.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8239" for this suite.
Dec 25 14:13:22.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:13:22.916: INFO: namespace daemonsets-8239 deletion completed in 6.197796478s

• [SLOW TEST:66.280 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
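The RollingUpdate check above repeatedly lists the daemon pods and reports any still running the old image ("Wrong image for pod: ... Expected: ... got: ...") until every pod matches the updated spec. A minimal sketch of that comparison (illustrative only, not the e2e framework's actual code):

```python
# Hypothetical helper mirroring the "Wrong image for pod" check in the log.
EXPECTED_IMAGE = "gcr.io/kubernetes-e2e-test-images/redis:1.0"

def pods_with_wrong_image(pod_images, expected=EXPECTED_IMAGE):
    """Given a pod-name -> container-image map, return the pods that still
    run an image other than the updated DaemonSet spec's image."""
    return sorted(name for name, image in pod_images.items() if image != expected)

# State corresponding to the log above: one pod still on the old nginx image.
state = {
    "daemon-set-4ftlq": "docker.io/library/nginx:1.14-alpine",
    "daemon-set-jwbks": EXPECTED_IMAGE,
}
print(pods_with_wrong_image(state))  # ['daemon-set-4ftlq']
```

The test only moves on to "Check that daemon pods are still running on every node" once this list is empty.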
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:13:22.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 25 14:16:23.578: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:23.592: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:25.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:25.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:27.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:27.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:29.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:29.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:31.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:31.611: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:33.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:33.606: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:35.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:35.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:37.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:37.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:39.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:39.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:41.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:41.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:43.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:43.605: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:45.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:45.609: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:47.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:47.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:49.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:49.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:51.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:51.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:53.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:53.610: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:55.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:55.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:57.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:57.604: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:16:59.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:16:59.606: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:01.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:01.606: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:03.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:03.606: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:05.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:05.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:07.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:07.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:09.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:09.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:11.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:11.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:13.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:13.608: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:15.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:15.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:17.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:17.600: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:19.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:19.604: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:21.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:21.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:23.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:23.608: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:25.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:25.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:27.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:27.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:29.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:29.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:31.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:31.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:33.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:33.607: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:35.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:35.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:37.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:37.605: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:39.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:39.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:41.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:41.605: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:43.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:43.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:45.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:45.599: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:47.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:47.602: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:49.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:49.601: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:51.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:51.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:53.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:53.611: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:55.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:55.603: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 25 14:17:57.593: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 25 14:17:57.601: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:17:57.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8324" for this suite.
Dec 25 14:18:19.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:18:19.932: INFO: namespace container-lifecycle-hook-8324 deletion completed in 22.32380768s

• [SLOW TEST:297.015 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
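The long "Waiting for pod ... to disappear / still exists" run above is the framework polling the API server on a fixed interval (every 2s here) until the pod object is gone or a timeout expires. A minimal sketch of that pattern, assuming a `pod_exists` callable rather than a real API client:

```python
import time

def wait_for_pod_to_disappear(pod_exists, timeout=120.0, interval=2.0,
                              clock=time.monotonic, sleep=time.sleep):
    """Poll pod_exists() every `interval` seconds; return True once the pod
    is gone, or False if it still exists when the timeout elapses.
    (Illustrative sketch, not the e2e framework's implementation.)"""
    deadline = clock() + timeout
    while clock() < deadline:
        if not pod_exists():
            return True
        sleep(interval)
    return False

# Simulate a pod that disappears on the third poll (sleep stubbed out):
polls = iter([True, True, False])
print(wait_for_pod_to_disappear(lambda: next(polls), sleep=lambda _: None))  # True
```

The five-minute run in this test block is expected: the pod's containers must terminate (honoring any graceful-deletion period) before the object is finally removed.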
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:18:19.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 25 14:18:20.061: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:18:36.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8214" for this suite.
Dec 25 14:18:58.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:18:59.015: INFO: namespace init-container-8214 deletion completed in 22.103068714s

• [SLOW TEST:39.082 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:18:59.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ecda7bbd-8268-4073-a30d-3397aaf321d9
STEP: Creating a pod to test consume secrets
Dec 25 14:18:59.096: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689" in namespace "projected-9331" to be "success or failure"
Dec 25 14:18:59.105: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689": Phase="Pending", Reason="", readiness=false. Elapsed: 8.370308ms
Dec 25 14:19:01.121: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024191509s
Dec 25 14:19:03.138: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041381808s
Dec 25 14:19:05.145: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048772385s
Dec 25 14:19:07.152: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056150592s
Dec 25 14:19:09.161: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065006939s
STEP: Saw pod success
Dec 25 14:19:09.161: INFO: Pod "pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689" satisfied condition "success or failure"
Dec 25 14:19:09.166: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689 container projected-secret-volume-test: 
STEP: delete the pod
Dec 25 14:19:09.255: INFO: Waiting for pod pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689 to disappear
Dec 25 14:19:09.346: INFO: Pod pod-projected-secrets-bd105420-9497-46c0-bf7d-27c24ebd8689 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:19:09.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9331" for this suite.
Dec 25 14:19:15.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:19:15.512: INFO: namespace projected-9331 deletion completed in 6.158194315s

• [SLOW TEST:16.497 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
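The `Elapsed:` values in the phase-poll lines above are Go duration strings (`8.370308ms`, `2.024191509s`). A small parser for just the two unit suffixes that appear in this log, if you want to post-process timings from output like this (a sketch; Go's `time.Duration` format also allows compound values like `1m30s`, which this deliberately does not handle):

```python
def parse_go_duration(s):
    """Convert the simple Go duration strings seen in this log to seconds."""
    # Check 'ms' before 's', since every 'ms' string also ends with 's'.
    for suffix, scale in (("ms", 1e-3), ("s", 1.0)):
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * scale
    raise ValueError(f"unsupported duration: {s!r}")

print(parse_go_duration("2.024191509s"))  # 2.024191509
print(parse_go_duration("8.370308ms"))   # 0.008370308
```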
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:19:15.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-661ceb6a-484a-4216-8da7-6f07594d93d6
STEP: Creating a pod to test consume secrets
Dec 25 14:19:15.593: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b" in namespace "projected-7149" to be "success or failure"
Dec 25 14:19:15.608: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.724058ms
Dec 25 14:19:17.633: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04015577s
Dec 25 14:19:19.647: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05436361s
Dec 25 14:19:21.661: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068303085s
Dec 25 14:19:23.671: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078088139s
Dec 25 14:19:25.681: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087434345s
STEP: Saw pod success
Dec 25 14:19:25.681: INFO: Pod "pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b" satisfied condition "success or failure"
Dec 25 14:19:25.686: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b container secret-volume-test: 
STEP: delete the pod
Dec 25 14:19:25.783: INFO: Waiting for pod pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b to disappear
Dec 25 14:19:25.790: INFO: Pod pod-projected-secrets-45cca8e9-2a0e-4ad2-b602-62260968399b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:19:25.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7149" for this suite.
Dec 25 14:19:31.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:19:32.065: INFO: namespace projected-7149 deletion completed in 6.267038295s

• [SLOW TEST:16.552 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:19:32.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 25 14:19:40.277: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:19:40.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9913" for this suite.
Dec 25 14:19:46.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:19:46.536: INFO: namespace container-runtime-9913 deletion completed in 6.211661977s

• [SLOW TEST:14.471 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
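The termination-message test above exercises `terminationMessagePolicy: FallbackToLogsOnError`: when the container's termination message file is empty, the kubelet falls back to the tail of the container log (per the Kubernetes docs, limited to roughly the last 2048 bytes / 80 lines), which is why the log output `DONE` matches the expected message. A rough sketch of that selection logic (an approximation for illustration, not kubelet code):

```python
FALLBACK_BYTE_LIMIT = 2048  # documented cap for the log fallback (assumption: bytes only, lines ignored here)

def termination_message(message_file_contents, container_log, policy):
    """Pick the termination message the way FallbackToLogsOnError behaves,
    loosely: prefer the terminationMessagePath contents, otherwise fall
    back to the tail of the container log when the policy allows it."""
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError":
        return container_log[-FALLBACK_BYTE_LIMIT:]
    return ""

print(termination_message("", "DONE", "FallbackToLogsOnError"))  # DONE
```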
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:19:46.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 25 14:19:46.684: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:19:59.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2260" for this suite.
Dec 25 14:20:05.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:20:06.068: INFO: namespace init-container-2260 deletion completed in 6.150963893s

• [SLOW TEST:19.531 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:20:06.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2515.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2515.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2515.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2515.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2515.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 122.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.122_udp@PTR;check="$$(dig +tcp +noall +answer +search 122.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.122_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2515.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2515.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2515.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2515.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2515.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2515.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 122.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.122_udp@PTR;check="$$(dig +tcp +noall +answer +search 122.85.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.85.122_tcp@PTR;sleep 1; done
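Two name constructions in the probe scripts above are easy to miss: the PTR query reverses the service clusterIP's octets into `in-addr.arpa.` form (`10.106.85.122` becomes `122.85.106.10.in-addr.arpa.`), and the `awk` pipeline turns the pod's own IP into the dashed pod A record (`a-b-c-d.<namespace>.pod.cluster.local`). The same transforms in Python, for reference (a sketch of the naming rules, not part of the test):

```python
def ptr_name(ip):
    """Reverse IPv4 octets into the in-addr.arpa PTR query name,
    as the dig probes do for the service clusterIP."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

def pod_a_record(ip, namespace, cluster_domain="cluster.local"):
    """Dashed-IP pod A record, matching the `hostname -i | awk` pipeline
    in the probe script. The pod IP here is a made-up example."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(ptr_name("10.106.85.122"))              # 122.85.106.10.in-addr.arpa.
print(pod_a_record("10.44.0.1", "dns-2515"))  # 10-44-0-1.dns-2515.pod.cluster.local
```

Each probe writes an `OK` marker file per record type, and the test then reads those results back from the prober pod, which is what the "looking for the results for each expected name" step below is doing.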

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 25 14:20:18.387: INFO: Unable to read wheezy_udp@dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.398: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.407: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.418: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.424: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.428: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.431: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.434: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.437: INFO: Unable to read 10.106.85.122_udp@PTR from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.441: INFO: Unable to read 10.106.85.122_tcp@PTR from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.445: INFO: Unable to read jessie_udp@dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.448: INFO: Unable to read jessie_tcp@dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.454: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.459: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.493: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.515: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-2515.svc.cluster.local from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.522: INFO: Unable to read jessie_udp@PodARecord from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.532: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.548: INFO: Unable to read 10.106.85.122_udp@PTR from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.557: INFO: Unable to read 10.106.85.122_tcp@PTR from pod dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc: the server could not find the requested resource (get pods dns-test-b017c524-9be5-4570-a40e-6623aa9183fc)
Dec 25 14:20:18.557: INFO: Lookups using dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc failed for: [wheezy_udp@dns-test-service.dns-2515.svc.cluster.local wheezy_tcp@dns-test-service.dns-2515.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-2515.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-2515.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.85.122_udp@PTR 10.106.85.122_tcp@PTR jessie_udp@dns-test-service.dns-2515.svc.cluster.local jessie_tcp@dns-test-service.dns-2515.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2515.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-2515.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-2515.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.85.122_udp@PTR 10.106.85.122_tcp@PTR]

Dec 25 14:20:23.707: INFO: DNS probes using dns-2515/dns-test-b017c524-9be5-4570-a40e-6623aa9183fc succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:20:24.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2515" for this suite.
Dec 25 14:20:30.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:20:30.439: INFO: namespace dns-2515 deletion completed in 6.201898719s

• [SLOW TEST:24.371 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:20:30.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-79ef4550-15d3-4842-b26b-7acf3e53bf9d
STEP: Creating a pod to test consume configMaps
Dec 25 14:20:30.560: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569" in namespace "projected-8001" to be "success or failure"
Dec 25 14:20:30.571: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569": Phase="Pending", Reason="", readiness=false. Elapsed: 10.707407ms
Dec 25 14:20:32.592: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031920659s
Dec 25 14:20:34.605: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044537695s
Dec 25 14:20:36.641: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081246245s
Dec 25 14:20:38.651: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090684924s
Dec 25 14:20:40.661: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100677558s
STEP: Saw pod success
Dec 25 14:20:40.661: INFO: Pod "pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569" satisfied condition "success or failure"
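The "Waiting up to 5m0s … to be 'success or failure'" lines above are a phase-polling loop: the framework re-reads the pod every couple of seconds until its phase is terminal (`Succeeded` or `Failed`) or the timeout expires. A minimal sketch of that wait, with the phase getter and clock injected so it can be exercised without a cluster (the real implementation is Go inside the e2e framework):

```python
import time

# Minimal sketch of the "success or failure" wait seen in the log.
# Assumption: get_phase() returns the current pod phase string; this is
# an illustrative stand-in for the framework's Go WaitForPodCondition.
def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll until the pod phase is Succeeded or Failed, else time out."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)  # the log shows roughly 2s between polls
    raise TimeoutError("pod did not reach a terminal phase")
```

Each `Elapsed:` line in the log corresponds to one iteration of such a loop; the loop exits on the first `Phase="Succeeded"` observation.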
Dec 25 14:20:40.667: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 25 14:20:40.939: INFO: Waiting for pod pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569 to disappear
Dec 25 14:20:40.951: INFO: Pod pod-projected-configmaps-fcb8979a-dd49-4a6f-ab6a-1586f5fac569 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:20:40.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8001" for this suite.
Dec 25 14:20:47.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:20:47.261: INFO: namespace projected-8001 deletion completed in 6.303889023s

• [SLOW TEST:16.821 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:20:47.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
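The QOS class being verified here is derived from the pod's container resource requests and limits: `BestEffort` when no container sets any, `Guaranteed` when every container sets equal requests and limits for both cpu and memory, `Burstable` otherwise. A simplified sketch of that classification (the authoritative logic lives in the Kubernetes Go source; this is an approximation for illustration):

```python
def qos_class(containers):
    """Classify a pod the way Kubernetes assigns its QOS class.

    Each container is a dict like
    {"requests": {"cpu": "100m", "memory": "64Mi"},
     "limits":   {"cpu": "100m", "memory": "64Mi"}}.
    Simplified sketch; omits init containers and other edge cases.
    """
    any_set = False
    guaranteed = True
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        if req or lim:
            any_set = True
        for res in ("cpu", "memory"):
            # Guaranteed requires requests == limits for both resources.
            if req.get(res) is None or req.get(res) != lim.get(res):
                guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"
```

The test then simply asserts that the API server populated `status.qosClass` with the expected value for the pod spec it submitted.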
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:20:47.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4524" for this suite.
Dec 25 14:21:09.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:21:09.567: INFO: namespace pods-4524 deletion completed in 22.166153616s

• [SLOW TEST:22.306 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:21:09.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b84b50d6-be13-4515-99e2-5bb21e72fea3
STEP: Creating a pod to test consume secrets
Dec 25 14:21:09.690: INFO: Waiting up to 5m0s for pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877" in namespace "secrets-3692" to be "success or failure"
Dec 25 14:21:09.749: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877": Phase="Pending", Reason="", readiness=false. Elapsed: 58.599883ms
Dec 25 14:21:11.755: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064599467s
Dec 25 14:21:13.765: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074599207s
Dec 25 14:21:15.791: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10077999s
Dec 25 14:21:17.801: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110038204s
Dec 25 14:21:19.809: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11852752s
STEP: Saw pod success
Dec 25 14:21:19.809: INFO: Pod "pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877" satisfied condition "success or failure"
Dec 25 14:21:19.813: INFO: Trying to get logs from node iruya-node pod pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877 container secret-volume-test: 
STEP: delete the pod
Dec 25 14:21:19.987: INFO: Waiting for pod pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877 to disappear
Dec 25 14:21:20.008: INFO: Pod pod-secrets-8215ca2b-5ace-49e3-a97d-b4451ca83877 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:21:20.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3692" for this suite.
Dec 25 14:21:26.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:21:26.724: INFO: namespace secrets-3692 deletion completed in 6.707897291s

• [SLOW TEST:17.157 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:21:26.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 14:21:26.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c" in namespace "downward-api-6746" to be "success or failure"
Dec 25 14:21:26.839: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022889ms
Dec 25 14:21:28.857: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023346831s
Dec 25 14:21:30.871: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037347264s
Dec 25 14:21:32.884: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050433449s
Dec 25 14:21:34.946: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112912818s
Dec 25 14:21:37.019: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186233913s
STEP: Saw pod success
Dec 25 14:21:37.020: INFO: Pod "downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c" satisfied condition "success or failure"
Dec 25 14:21:37.034: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c container client-container: 
STEP: delete the pod
Dec 25 14:21:37.151: INFO: Waiting for pod downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c to disappear
Dec 25 14:21:37.201: INFO: Pod downwardapi-volume-eb7bbf60-5611-4c24-8067-d6fc7c42de5c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:21:37.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6746" for this suite.
Dec 25 14:21:43.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:21:43.410: INFO: namespace downward-api-6746 deletion completed in 6.199988908s

• [SLOW TEST:16.685 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:21:43.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1339
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1339
STEP: Creating statefulset with conflicting port in namespace statefulset-1339
STEP: Waiting until pod test-pod starts running in namespace statefulset-1339
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1339
Dec 25 14:21:53.651: INFO: Observed stateful pod in namespace: statefulset-1339, name: ss-0, uid: cf54a684-5b03-48fc-9cee-6da3fc8e94fa, status phase: Pending. Waiting for statefulset controller to delete.
Dec 25 14:21:56.492: INFO: Observed stateful pod in namespace: statefulset-1339, name: ss-0, uid: cf54a684-5b03-48fc-9cee-6da3fc8e94fa, status phase: Failed. Waiting for statefulset controller to delete.
Dec 25 14:21:56.511: INFO: Observed stateful pod in namespace: statefulset-1339, name: ss-0, uid: cf54a684-5b03-48fc-9cee-6da3fc8e94fa, status phase: Failed. Waiting for statefulset controller to delete.
Dec 25 14:21:56.559: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1339
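The observations above show how the test proves recreation: the pod name stays `ss-0`, but the UID does not survive deletion, so seeing a delete event followed by an observation with a different UID demonstrates the StatefulSet controller deleted the evicted pod and created a fresh one. A hedged sketch of that detection logic over a recorded event stream (the real test uses a watch against the API server; the event-tuple shape here is an assumption for illustration):

```python
def observed_recreation(events):
    """Return True if a pod was deleted and then re-observed with a new UID.

    `events` items are ("observed", uid, phase) or ("deleted", uid).
    Illustrative only; mirrors the UID comparison implied by the log.
    """
    deleted_uids = set()
    for ev in events:
        if ev[0] == "deleted":
            deleted_uids.add(ev[1])
        elif ev[0] == "observed" and deleted_uids and ev[1] not in deleted_uids:
            return True  # new UID after a delete: the pod was recreated
    return False
```

In the log, UID cf54a684-… moves through Pending and Failed, a delete event arrives, and the test then waits for a running `ss-0` with a new UID.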
STEP: Removing pod with conflicting port in namespace statefulset-1339
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1339 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 25 14:22:16.895: INFO: Deleting all statefulset in ns statefulset-1339
Dec 25 14:22:16.900: INFO: Scaling statefulset ss to 0
Dec 25 14:22:26.940: INFO: Waiting for statefulset status.replicas updated to 0
Dec 25 14:22:26.948: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:22:26.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1339" for this suite.
Dec 25 14:22:33.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:22:33.101: INFO: namespace statefulset-1339 deletion completed in 6.102179372s

• [SLOW TEST:49.690 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:22:33.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:22:43.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1597" for this suite.
Dec 25 14:22:49.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:22:49.550: INFO: namespace emptydir-wrapper-1597 deletion completed in 6.168144503s

• [SLOW TEST:16.449 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:22:49.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 25 14:22:49.649: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 25 14:22:50.329: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 25 14:22:52.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:22:54.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:22:56.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:22:58.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:23:00.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712880570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:23:07.393: INFO: Waited 4.829772448s for the sample-apiserver to be ready to handle requests.
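The repeated `DeploymentStatus` dumps above show what the wait is checking: `ReadyReplicas` is still 0 and the `Available` condition is `False` with reason `MinimumReplicasUnavailable` while the ReplicaSet rolls out, so the test keeps polling. A simplified readiness predicate over that status shape (field names mirror the log; this is a sketch, not the framework's Go `WaitForDeploymentComplete`):

```python
def deployment_available(status):
    """Return True once the deployment is usable: all desired replicas
    are ready and the Available condition is True.

    `status` is a dict mimicking the v1.DeploymentStatus fields printed
    in the log above; simplified for illustration.
    """
    conds = {c["type"]: c["status"] for c in status.get("conditions", [])}
    return (status.get("replicas", 0) > 0
            and status.get("readyReplicas", 0) >= status.get("replicas", 0)
            and conds.get("Available") == "True")
```

Once this flips to true the sample-apiserver is registered with the aggregator and can handle requests, which is the "Waited … to be ready" line above.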
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:23:08.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7760" for this suite.
Dec 25 14:23:14.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:23:14.873: INFO: namespace aggregator-7760 deletion completed in 6.210745405s

• [SLOW TEST:25.323 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:23:14.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b3417f87-1f5d-48c9-9fef-89fc99262ea5
STEP: Creating a pod to test consume secrets
Dec 25 14:23:15.040: INFO: Waiting up to 5m0s for pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99" in namespace "secrets-5501" to be "success or failure"
Dec 25 14:23:15.046: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99": Phase="Pending", Reason="", readiness=false. Elapsed: 5.980754ms
Dec 25 14:23:17.055: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014652411s
Dec 25 14:23:19.074: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033849285s
Dec 25 14:23:21.082: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042326618s
Dec 25 14:23:23.089: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049287991s
Dec 25 14:23:25.098: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057769723s
STEP: Saw pod success
Dec 25 14:23:25.098: INFO: Pod "pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99" satisfied condition "success or failure"
Dec 25 14:23:25.101: INFO: Trying to get logs from node iruya-node pod pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99 container secret-volume-test: 
STEP: delete the pod
Dec 25 14:23:25.241: INFO: Waiting for pod pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99 to disappear
Dec 25 14:23:25.268: INFO: Pod pod-secrets-59ec2f36-bef9-4900-9b91-742699f80f99 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:23:25.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5501" for this suite.
Dec 25 14:23:31.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:23:31.475: INFO: namespace secrets-5501 deletion completed in 6.20217212s

• [SLOW TEST:16.600 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
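The repeated `Phase="Pending" … Elapsed: …` lines above come from a wait loop that polls the pod until it reaches a terminal phase or a 5m0s timeout expires. A rough sketch of that pattern (a hypothetical Python helper, not the e2e framework's actual Go code; `get_phase` stands in for an API read):

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase.

    Mirrors the log's "Waiting up to 5m0s for pod ... to be
    'success or failure'" loop: check, compare elapsed time against the
    deadline, sleep, repeat. Returns (phase, elapsed_seconds) on success
    and raises TimeoutError if the deadline passes first.
    """
    start = clock()
    while True:
        phase = get_phase()          # e.g. "Pending", "Running", "Succeeded"
        elapsed = clock() - start
        if phase in TERMINAL_PHASES:
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(poll_s)

if __name__ == "__main__":
    # Simulate a pod that reports Pending a few times, then Succeeded.
    phases = iter(["Pending"] * 5 + ["Succeeded"])
    phase, _ = wait_for_terminal_phase(lambda: next(phases),
                                       sleep=lambda s: None)
    print(phase)  # Succeeded
```

The injectable `clock`/`sleep` parameters exist only so the sketch can be exercised without real delays.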
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:23:31.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-8ecfc919-519b-41ca-8a2a-b0ba00b69b3c
STEP: Creating configMap with name cm-test-opt-upd-5f3899aa-86b7-4c69-bb6d-4db6d00d8ec4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8ecfc919-519b-41ca-8a2a-b0ba00b69b3c
STEP: Updating configmap cm-test-opt-upd-5f3899aa-86b7-4c69-bb6d-4db6d00d8ec4
STEP: Creating configMap with name cm-test-opt-create-4c9ff2bb-507b-488a-868c-ff2456da4537
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:23:50.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1062" for this suite.
Dec 25 14:24:12.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:24:12.266: INFO: namespace configmap-1062 deletion completed in 22.166377209s

• [SLOW TEST:40.791 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
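The ConfigMap spec above deletes one *optional* configMap, updates a second, and creates a third, then waits for the pod's volume to reflect all three changes. The semantics being tested can be modeled loosely (a conceptual Python sketch with simplified names, not kubelet code; the real sync happens asynchronously on the node):

```python
def resolve_optional_configmap_volume(store, name, optional):
    """Resolve a configMap volume source against current API state.

    store maps configMap name -> data dict (the live objects).
    Missing + optional  -> empty contents (the mount stays, just empty).
    Missing + required  -> KeyError (the pod could not start).
    Present             -> current data, so updates are reflected.
    """
    if name in store:
        return dict(store[name])
    if optional:
        return {}
    raise KeyError(name)

if __name__ == "__main__":
    store = {"cm-test-opt-del": {"data-1": "value-1"},
             "cm-test-opt-upd": {"data-1": "value-1"}}
    # Mirror the STEPs above: delete, update, create.
    del store["cm-test-opt-del"]
    store["cm-test-opt-upd"] = {"data-3": "value-3"}
    store["cm-test-opt-create"] = {"data-1": "value-1"}
    print(resolve_optional_configmap_volume(store, "cm-test-opt-del", True))     # {}
    print(resolve_optional_configmap_volume(store, "cm-test-opt-upd", True))     # {'data-3': 'value-3'}
    print(resolve_optional_configmap_volume(store, "cm-test-opt-create", True))  # {'data-1': 'value-1'}
```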
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:24:12.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 25 14:24:12.333: INFO: Waiting up to 5m0s for pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c" in namespace "emptydir-5523" to be "success or failure"
Dec 25 14:24:12.336: INFO: Pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.086889ms
Dec 25 14:24:14.344: INFO: Pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010923349s
Dec 25 14:24:16.359: INFO: Pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026397581s
Dec 25 14:24:18.370: INFO: Pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037301568s
Dec 25 14:24:20.385: INFO: Pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051572032s
STEP: Saw pod success
Dec 25 14:24:20.385: INFO: Pod "pod-5897babc-d36b-4ce6-a765-7ecc57afa68c" satisfied condition "success or failure"
Dec 25 14:24:20.391: INFO: Trying to get logs from node iruya-node pod pod-5897babc-d36b-4ce6-a765-7ecc57afa68c container test-container: 
STEP: delete the pod
Dec 25 14:24:20.454: INFO: Waiting for pod pod-5897babc-d36b-4ce6-a765-7ecc57afa68c to disappear
Dec 25 14:24:20.460: INFO: Pod pod-5897babc-d36b-4ce6-a765-7ecc57afa68c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:24:20.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5523" for this suite.
Dec 25 14:24:26.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:24:26.602: INFO: namespace emptydir-5523 deletion completed in 6.136500753s

• [SLOW TEST:14.335 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
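The `(non-root,0644,tmpfs)` case writes a file into a tmpfs-backed emptyDir and verifies its permission bits. The permission-check half of that can be illustrated in isolation (a minimal sketch using a plain temp directory rather than an actual tmpfs emptyDir mount; POSIX only):

```python
import os
import stat
import tempfile

def create_with_mode(dirpath, name, mode=0o644, payload=b"mount-tmpfs"):
    """Create a file, force its permission bits, and report what stat sees.

    Analogous to the test container writing into the volume mount and the
    test asserting the observed mode is 0644.
    """
    path = os.path.join(dirpath, name)
    with open(path, "wb") as f:
        f.write(payload)
    os.chmod(path, mode)                       # explicit chmod, so umask is irrelevant
    return stat.S_IMODE(os.stat(path).st_mode)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(oct(create_with_mode(d, "test-file")))  # 0o644
```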
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:24:26.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1225 14:24:39.008608       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 25 14:24:39.008: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:24:39.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3122" for this suite.
Dec 25 14:24:52.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:24:52.122: INFO: namespace gc-3122 deletion completed in 12.381577783s

• [SLOW TEST:25.520 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
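The garbage-collector spec above gives half of `simpletest-rc-to-be-deleted`'s pods a second owner (`simpletest-rc-to-stay`), deletes the first RC, and asserts the doubly-owned pods survive. The rule under test — a dependent is collected only when *all* of its owners are gone — can be sketched as (a toy model, not the controller's actual graph logic):

```python
def cascade_delete(dependents, owners, deleted):
    """Compute which dependents the garbage collector may remove.

    dependents: iterable of object names.
    owners:     dict name -> set of owner names (ownerReferences).
    deleted:    set of owners being deleted.
    An object with at least one remaining valid owner is kept alive.
    """
    collected = set()
    for obj in dependents:
        remaining = owners[obj] - deleted
        if not remaining:
            collected.add(obj)
    return collected

if __name__ == "__main__":
    pods = ["pod-0", "pod-1"]
    owners = {"pod-0": {"rc-to-be-deleted"},
              "pod-1": {"rc-to-be-deleted", "rc-to-stay"}}
    # Only the pod whose sole owner was deleted is collected.
    print(sorted(cascade_delete(pods, owners, {"rc-to-be-deleted"})))  # ['pod-0']
```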
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:24:52.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:24:54.806: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 25 14:24:59.824: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 25 14:25:05.838: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 25 14:25:05.880: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3173,SelfLink:/apis/apps/v1/namespaces/deployment-3173/deployments/test-cleanup-deployment,UID:48264fa8-74a1-4c80-8fb3-958816837ef9,ResourceVersion:18025768,Generation:1,CreationTimestamp:2019-12-25 14:25:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 25 14:25:05.912: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3173,SelfLink:/apis/apps/v1/namespaces/deployment-3173/replicasets/test-cleanup-deployment-55bbcbc84c,UID:a0c727bb-61a3-420e-b1b5-778f2a887d28,ResourceVersion:18025770,Generation:1,CreationTimestamp:2019-12-25 14:25:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 48264fa8-74a1-4c80-8fb3-958816837ef9 0xc002ce2e77 0xc002ce2e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 25 14:25:05.912: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 25 14:25:05.913: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3173,SelfLink:/apis/apps/v1/namespaces/deployment-3173/replicasets/test-cleanup-controller,UID:aae20f42-baf3-449f-b59c-2830e459bef9,ResourceVersion:18025769,Generation:1,CreationTimestamp:2019-12-25 14:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 48264fa8-74a1-4c80-8fb3-958816837ef9 0xc002ce2c67 0xc002ce2c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 25 14:25:06.768: INFO: Pod "test-cleanup-controller-cqgvs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-cqgvs,GenerateName:test-cleanup-controller-,Namespace:deployment-3173,SelfLink:/api/v1/namespaces/deployment-3173/pods/test-cleanup-controller-cqgvs,UID:48898fab-0588-44e9-891a-b324d153653f,ResourceVersion:18025766,Generation:0,CreationTimestamp:2019-12-25 14:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller aae20f42-baf3-449f-b59c-2830e459bef9 0xc002ce3cef 0xc002ce3d00}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jgcxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jgcxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jgcxj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ce3d70} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002ce3d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:24:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:25:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:25:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-25 14:24:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:25:04 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://401b87b2ec56d0d8cf93091c72a0f78e914a141e4a4e19d366e1920925742e47}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:25:06.768: INFO: Pod "test-cleanup-deployment-55bbcbc84c-6g6f8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-6g6f8,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3173,SelfLink:/api/v1/namespaces/deployment-3173/pods/test-cleanup-deployment-55bbcbc84c-6g6f8,UID:43945a9e-399f-42a5-9cb2-820273d45103,ResourceVersion:18025776,Generation:0,CreationTimestamp:2019-12-25 14:25:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c a0c727bb-61a3-420e-b1b5-778f2a887d28 0xc002ce3e77 0xc002ce3e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jgcxj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jgcxj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-jgcxj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ce3ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ce3f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:25:05 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:25:06.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3173" for this suite.
Dec 25 14:25:12.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:25:13.091: INFO: namespace deployment-3173 deletion completed in 6.25984016s

• [SLOW TEST:20.968 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
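The Deployment dump above shows `RevisionHistoryLimit:*0`, so the controller is expected to delete every old ReplicaSet rather than keep any rollback history. The pruning policy can be sketched roughly as (a simplified Python model; the real controller orders by revision annotation and issues API deletes):

```python
def prune_old_replicasets(old_rs_by_revision, history_limit):
    """Return the old ReplicaSets a Deployment controller would delete.

    old_rs_by_revision maps ReplicaSet name -> revision number for sets
    that no longer match the current pod template. The newest
    history_limit of them are retained for rollback; the rest go.
    With a limit of 0 (as in this test), everything old is deleted.
    """
    ordered = sorted(old_rs_by_revision,
                     key=old_rs_by_revision.get, reverse=True)
    return ordered[history_limit:]

if __name__ == "__main__":
    old = {"test-cleanup-controller": 1}
    print(prune_old_replicasets(old, 0))  # ['test-cleanup-controller']
```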
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:25:13.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:25:13.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:25:25.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3571" for this suite.
Dec 25 14:26:09.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:26:09.582: INFO: namespace pods-3571 deletion completed in 44.159341501s

• [SLOW TEST:56.490 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:26:09.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 25 14:26:09.736: INFO: Waiting up to 5m0s for pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246" in namespace "containers-674" to be "success or failure"
Dec 25 14:26:09.790: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246": Phase="Pending", Reason="", readiness=false. Elapsed: 53.378829ms
Dec 25 14:26:11.820: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083480394s
Dec 25 14:26:13.888: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152279064s
Dec 25 14:26:15.900: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164205719s
Dec 25 14:26:17.913: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177009447s
Dec 25 14:26:19.923: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186675338s
STEP: Saw pod success
Dec 25 14:26:19.923: INFO: Pod "client-containers-07b13416-ed32-473f-9df4-34fec8e40246" satisfied condition "success or failure"
Dec 25 14:26:19.926: INFO: Trying to get logs from node iruya-node pod client-containers-07b13416-ed32-473f-9df4-34fec8e40246 container test-container: 
STEP: delete the pod
Dec 25 14:26:20.022: INFO: Waiting for pod client-containers-07b13416-ed32-473f-9df4-34fec8e40246 to disappear
Dec 25 14:26:20.027: INFO: Pod client-containers-07b13416-ed32-473f-9df4-34fec8e40246 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:26:20.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-674" for this suite.
Dec 25 14:26:26.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:26:26.167: INFO: namespace containers-674 deletion completed in 6.134744815s

• [SLOW TEST:16.585 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
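The "override the image's default arguments" spec relies on the documented interaction between an image's ENTRYPOINT/CMD and the pod spec's `command`/`args`: setting `args` alone replaces the image's CMD while keeping its ENTRYPOINT. That resolution table, as a small sketch (hypothetical helper names; entrypoint/cmd values are illustrative):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve what the container actually runs.

    - command set:            command (+ args if given); image CMD ignored.
    - args set, no command:   image ENTRYPOINT + args (this test's case).
    - neither set:            image ENTRYPOINT + image CMD.
    """
    if command is not None:
        return list(command) + list(args or [])
    if args is not None:
        return list(image_entrypoint) + list(args)
    return list(image_entrypoint) + list(image_cmd)

if __name__ == "__main__":
    print(effective_invocation(["/ep"], ["default-arg"], args=["override"]))
    # ['/ep', 'override']
```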
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:26:26.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:26:26.244: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:26:27.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7650" for this suite.
Dec 25 14:26:33.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:26:33.700: INFO: namespace custom-resource-definition-7650 deletion completed in 6.170540406s

• [SLOW TEST:7.532 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:26:33.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-7be15757-55e2-40c2-9cb9-6e738a39ec75 in namespace container-probe-3812
Dec 25 14:26:44.145: INFO: Started pod test-webserver-7be15757-55e2-40c2-9cb9-6e738a39ec75 in namespace container-probe-3812
STEP: checking the pod's current state and verifying that restartCount is present
Dec 25 14:26:44.149: INFO: Initial restart count of pod test-webserver-7be15757-55e2-40c2-9cb9-6e738a39ec75 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:30:45.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3812" for this suite.
Dec 25 14:30:51.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:30:51.886: INFO: namespace container-probe-3812 deletion completed in 6.178619131s

• [SLOW TEST:258.187 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:30:51.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-9knv
STEP: Creating a pod to test atomic-volume-subpath
Dec 25 14:30:52.078: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9knv" in namespace "subpath-3050" to be "success or failure"
Dec 25 14:30:52.087: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.567221ms
Dec 25 14:30:54.098: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019907265s
Dec 25 14:30:56.104: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026530314s
Dec 25 14:30:58.116: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03827635s
Dec 25 14:31:00.125: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046829883s
Dec 25 14:31:02.131: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 10.053546611s
Dec 25 14:31:04.141: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 12.063554154s
Dec 25 14:31:06.406: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 14.328547175s
Dec 25 14:31:08.414: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 16.335978579s
Dec 25 14:31:10.424: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 18.346547738s
Dec 25 14:31:12.435: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 20.357221194s
Dec 25 14:31:14.448: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 22.369791101s
Dec 25 14:31:16.491: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 24.412903332s
Dec 25 14:31:18.516: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 26.438035702s
Dec 25 14:31:20.534: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 28.456452887s
Dec 25 14:31:22.551: INFO: Pod "pod-subpath-test-projected-9knv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.473465252s
STEP: Saw pod success
Dec 25 14:31:22.552: INFO: Pod "pod-subpath-test-projected-9knv" satisfied condition "success or failure"
Dec 25 14:31:22.558: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-9knv container test-container-subpath-projected-9knv: 
STEP: delete the pod
Dec 25 14:31:22.657: INFO: Waiting for pod pod-subpath-test-projected-9knv to disappear
Dec 25 14:31:22.683: INFO: Pod pod-subpath-test-projected-9knv no longer exists
STEP: Deleting pod pod-subpath-test-projected-9knv
Dec 25 14:31:22.683: INFO: Deleting pod "pod-subpath-test-projected-9knv" in namespace "subpath-3050"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:31:22.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3050" for this suite.
Dec 25 14:31:28.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:31:28.866: INFO: namespace subpath-3050 deletion completed in 6.176854932s

• [SLOW TEST:36.978 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:31:28.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:31:28.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-170" for this suite.
Dec 25 14:31:34.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:31:35.160: INFO: namespace services-170 deletion completed in 6.204509342s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.293 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:31:35.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 25 14:31:35.318: INFO: Waiting up to 5m0s for pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c" in namespace "emptydir-1174" to be "success or failure"
Dec 25 14:31:35.324: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.503645ms
Dec 25 14:31:37.339: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020597471s
Dec 25 14:31:39.364: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045085593s
Dec 25 14:31:41.371: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052195155s
Dec 25 14:31:43.405: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086340513s
Dec 25 14:31:45.415: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096674381s
STEP: Saw pod success
Dec 25 14:31:45.415: INFO: Pod "pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c" satisfied condition "success or failure"
Dec 25 14:31:45.420: INFO: Trying to get logs from node iruya-node pod pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c container test-container: 
STEP: delete the pod
Dec 25 14:31:45.468: INFO: Waiting for pod pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c to disappear
Dec 25 14:31:45.523: INFO: Pod pod-e61d1a2a-79ca-492e-b3b3-92d3ab7e833c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:31:45.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1174" for this suite.
Dec 25 14:31:51.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:31:51.686: INFO: namespace emptydir-1174 deletion completed in 6.157845564s

• [SLOW TEST:16.526 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:31:51.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-46e15ec0-c4c0-451e-bf71-fb9cf89c0c89
STEP: Creating a pod to test consume configMaps
Dec 25 14:31:51.816: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21" in namespace "projected-5661" to be "success or failure"
Dec 25 14:31:51.836: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21": Phase="Pending", Reason="", readiness=false. Elapsed: 19.658772ms
Dec 25 14:31:53.857: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040421698s
Dec 25 14:31:55.872: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056336477s
Dec 25 14:31:57.893: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077197555s
Dec 25 14:31:59.907: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091214602s
Dec 25 14:32:01.928: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111667693s
STEP: Saw pod success
Dec 25 14:32:01.928: INFO: Pod "pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21" satisfied condition "success or failure"
Dec 25 14:32:01.936: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 25 14:32:02.350: INFO: Waiting for pod pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21 to disappear
Dec 25 14:32:02.359: INFO: Pod pod-projected-configmaps-c48bd031-31d8-4542-91aa-c9d7d788bd21 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:32:02.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5661" for this suite.
Dec 25 14:32:08.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:32:08.519: INFO: namespace projected-5661 deletion completed in 6.139103943s

• [SLOW TEST:16.832 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:32:08.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-929019dc-3617-4329-a0e8-0daa8396ab64
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:32:20.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7133" for this suite.
Dec 25 14:32:42.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:32:42.855: INFO: namespace configmap-7133 deletion completed in 22.123724243s

• [SLOW TEST:34.336 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:32:42.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:32:42.926: INFO: Creating deployment "nginx-deployment"
Dec 25 14:32:42.935: INFO: Waiting for observed generation 1
Dec 25 14:32:45.576: INFO: Waiting for all required pods to come up
Dec 25 14:32:46.162: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 25 14:33:14.199: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 25 14:33:14.207: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 25 14:33:14.215: INFO: Updating deployment nginx-deployment
Dec 25 14:33:14.215: INFO: Waiting for observed generation 2
Dec 25 14:33:16.365: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 25 14:33:17.682: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 25 14:33:17.692: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 25 14:33:17.917: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 25 14:33:17.917: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 25 14:33:17.920: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 25 14:33:17.924: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 25 14:33:17.924: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 25 14:33:17.935: INFO: Updating deployment nginx-deployment
Dec 25 14:33:17.935: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 25 14:33:18.934: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 25 14:33:19.482: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 25 14:33:28.520: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6267,SelfLink:/apis/apps/v1/namespaces/deployment-6267/deployments/nginx-deployment,UID:adca399d-cbf4-440b-baf5-6374a3c73d12,ResourceVersion:18026901,Generation:3,CreationTimestamp:2019-12-25 14:32:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-25 14:33:18 +0000 UTC 2019-12-25 14:33:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-25 14:33:21 +0000 UTC 2019-12-25 14:32:42 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 25 14:33:29.899: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6267,SelfLink:/apis/apps/v1/namespaces/deployment-6267/replicasets/nginx-deployment-55fb7cb77f,UID:8e9865cd-d75d-4858-9a70-4318b6f520cb,ResourceVersion:18026889,Generation:3,CreationTimestamp:2019-12-25 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment adca399d-cbf4-440b-baf5-6374a3c73d12 0xc001f49db7 0xc001f49db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 25 14:33:29.899: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 25 14:33:29.899: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6267,SelfLink:/apis/apps/v1/namespaces/deployment-6267/replicasets/nginx-deployment-7b8c6f4498,UID:2bda526e-9cca-402d-9246-33957ca2e7bc,ResourceVersion:18026897,Generation:3,CreationTimestamp:2019-12-25 14:32:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment adca399d-cbf4-440b-baf5-6374a3c73d12 0xc001f49e87 0xc001f49e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 25 14:33:32.064: INFO: Pod "nginx-deployment-55fb7cb77f-5njmq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5njmq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-5njmq,UID:917b478d-709f-44d9-8dfd-bb37eaf7f780,ResourceVersion:18026795,Generation:0,CreationTimestamp:2019-12-25 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5f3c7 0xc002d5f3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002d5f440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5f460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
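The `Status.Conditions` block in the dump above is what the test framework inspects to decide availability: the pod shows `Ready=False` with reason `ContainersNotReady`, so it is reported as "not available". A minimal, self-contained sketch of that check (the field names follow the Kubernetes `PodCondition` schema; the helper function itself is hypothetical, and real Deployment availability additionally considers `minReadySeconds`):

```python
# Hypothetical helper mirroring the availability check implied by the log:
# a pod counts as available only when its Ready condition is True.
def is_pod_available(conditions):
    """conditions: list of dicts with 'type' and 'status' keys,
    shaped like PodStatus.Conditions in the dumps above."""
    for cond in conditions:
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    # No Ready condition yet -- e.g. a pod that has only been scheduled,
    # like the ones below whose Conditions contain just PodScheduled.
    return False


# The pod above has Ready=False (ContainersNotReady: [nginx]),
# so it is correctly reported as "not available".
print(is_pod_available([{"type": "Ready", "status": "False"},
                        {"type": "PodScheduled", "status": "True"}]))
```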
Dec 25 14:33:32.064: INFO: Pod "nginx-deployment-55fb7cb77f-5tbsn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5tbsn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-5tbsn,UID:a3f219cf-f396-4119-b23d-265218cb3257,ResourceVersion:18026869,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5f537 0xc002d5f538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002d5f5b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5f5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.064: INFO: Pod "nginx-deployment-55fb7cb77f-6gwlz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6gwlz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-6gwlz,UID:f15d1c76-aa7f-4b4a-bce0-2387a5af6c3a,ResourceVersion:18026805,Generation:0,CreationTimestamp:2019-12-25 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5f657 0xc002d5f658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d5f6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5f6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-25 14:33:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.065: INFO: Pod "nginx-deployment-55fb7cb77f-8fhlq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8fhlq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-8fhlq,UID:1d10e158-8993-4daf-a495-9eb22287cee8,ResourceVersion:18026878,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5f7b7 0xc002d5f7b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002d5f840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5f860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.065: INFO: Pod "nginx-deployment-55fb7cb77f-9pspx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9pspx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-9pspx,UID:7ed7998c-1972-418d-b172-07e221457121,ResourceVersion:18026840,Generation:0,CreationTimestamp:2019-12-25 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5f8e7 0xc002d5f8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002d5f970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5f990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.065: INFO: Pod "nginx-deployment-55fb7cb77f-db9vw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-db9vw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-db9vw,UID:94f22c20-0bbd-41a7-a01c-5608013e1bff,ResourceVersion:18026876,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5fa77 0xc002d5fa78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002d5faf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5fb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.065: INFO: Pod "nginx-deployment-55fb7cb77f-fldkl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fldkl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-fldkl,UID:b6e6445b-649b-4e99-b687-367ef2dc7854,ResourceVersion:18026894,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5fb97 0xc002d5fb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d5fc00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5fc30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-25 14:33:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.065: INFO: Pod "nginx-deployment-55fb7cb77f-hcmtd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hcmtd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-hcmtd,UID:dbe38c1a-084c-4578-b438-72eb2aa119ef,ResourceVersion:18026820,Generation:0,CreationTimestamp:2019-12-25 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5fd07 0xc002d5fd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d5fd70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5fd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-25 14:33:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.065: INFO: Pod "nginx-deployment-55fb7cb77f-hzzn8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hzzn8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-hzzn8,UID:9c718fd3-843f-440b-803e-939e5a6e9ca1,ResourceVersion:18026875,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5fe67 0xc002d5fe68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002d5fee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5ff00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-55fb7cb77f-k7ql5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k7ql5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-k7ql5,UID:a8201178-c985-483b-8a61-ecbf8b7ac1dd,ResourceVersion:18026911,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc002d5ff87 0xc002d5ff88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d5fff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028c6040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-25 14:33:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-55fb7cb77f-l428n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l428n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-l428n,UID:57b4a69c-ed25-492d-8586-e78404a2faf9,ResourceVersion:18026888,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc0028c6537 0xc0028c6538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028c6850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028c6910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-55fb7cb77f-x7pj2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x7pj2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-x7pj2,UID:d31ce164-c28c-4ab9-a6ba-edd0fdb910de,ResourceVersion:18026879,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc0028c6e27 0xc0028c6e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028c7080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028c7250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
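Every dump above shows the same failure signature: `Phase:Pending`, the `nginx` container stuck in `ContainerCreating`, and the image reference `nginx:404`, an intentionally nonexistent tag that this e2e deployment test rolls out to verify rollover behavior. A minimal manifest reproducing that state might look like the following sketch; the metadata and replica count are illustrative assumptions, and only the image tag, labels, and grace period are taken from the log:

```yaml
# Sketch only: reproduces the "not available" pod state seen in the log.
# The tag nginx:404 does not exist, so the pull fails and pods stay Pending.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical; mirrors the log's pod-name prefix
spec:
  replicas: 3                   # illustrative; the test's actual count is not shown here
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx             # matches the Labels map in the dumps
    spec:
      terminationGracePeriodSeconds: 0   # matches TerminationGracePeriodSeconds:*0 above
      containers:
      - name: nginx
        image: nginx:404        # invalid tag from the log; pods never become Ready
```

Applying such a manifest and then waiting on `kubectl rollout status` would hang until timeout, matching the repeated "is not available" messages in this log.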
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-55fb7cb77f-xnwgd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xnwgd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-55fb7cb77f-xnwgd,UID:c04ed6a9-a60c-4d49-83a3-bae4da1130e0,ResourceVersion:18026803,Generation:0,CreationTimestamp:2019-12-25 14:33:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8e9865cd-d75d-4858-9a70-4318b6f520cb 0xc0028c79a7 0xc0028c79a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028c7e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028c7e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-7b8c6f4498-6ftsm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6ftsm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-6ftsm,UID:2bf0df45-63de-4641-9590-2ff350dd2f5a,ResourceVersion:18026880,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc000388847 0xc000388848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000388f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000388f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-7b8c6f4498-b78vp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b78vp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-b78vp,UID:4f6d10f3-1a5d-4327-8cc0-b545c0ff3721,ResourceVersion:18026737,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc0003890b7 0xc0003890b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000389130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000389170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:06 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fd1d1a40a9bb80823620de1f23b88bf205c54fe4b832be7957040744c3c9f914}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-7b8c6f4498-btw8b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-btw8b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-btw8b,UID:1f2f9735-587b-406e-9cbb-28c50b7b7b32,ResourceVersion:18026877,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc000389277 0xc000389278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0003892e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000389310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.066: INFO: Pod "nginx-deployment-7b8c6f4498-cmls9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cmls9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-cmls9,UID:43b1abcb-fe51-46e8-8ee6-a3b16123475b,ResourceVersion:18026913,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc0003893f7 0xc0003893f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000389490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0003894b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-djrz7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-djrz7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-djrz7,UID:acd0305a-267d-4c8a-8ee8-005f36188c7a,ResourceVersion:18026902,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc000389597 0xc000389598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000389610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000389630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-25 14:33:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-f27f4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f27f4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-f27f4,UID:0a23e41b-89fd-43e7-af3a-31611886095a,ResourceVersion:18026881,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc000389757 0xc000389758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000389820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000389870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-f5hqs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f5hqs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-f5hqs,UID:c13adc33-6585-417e-8ff9-3f0d31b720ce,ResourceVersion:18026734,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc0003899c7 0xc0003899c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000389c20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000389c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://51cc521d58a6010413ddc4eb50c6ddc24d396819833e21e761069d8bcc687af5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-h4gjr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h4gjr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-h4gjr,UID:e9c1d10d-eb5e-491e-af22-1b9f6a4e73fb,ResourceVersion:18026760,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc000389e87 0xc000389e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000389ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://721b44db0f66cfb312790e4270d4c6a0cfdef2c6cee89f067386e363bbdb58a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-l69gx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l69gx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-l69gx,UID:3c8a9f09-8ab4-4412-8960-f2867e258180,ResourceVersion:18026732,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b560f7 0xc001b560f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5b9710dd5ee28638d0bbfa92ce147722797e36d95664a1d3ee61b4aacf4fef9a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-m55pk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m55pk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-m55pk,UID:489e5af0-884c-4d63-8d80-d67c1431c1f2,ResourceVersion:18026867,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b562d7 0xc001b562d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-nxzv6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nxzv6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-nxzv6,UID:1fa9cce8-8dc6-4737-9a81-8721b56f71e0,ResourceVersion:18026744,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56457 0xc001b56458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0c7e2fdb009eacdb6f41e9b630a7aa48b255e7a217f18a5cbea14c54b7fbea6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-prgdc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-prgdc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-prgdc,UID:e9538c5d-ac8d-471a-9e32-30a87e671901,ResourceVersion:18026907,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56617 0xc001b56618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.067: INFO: Pod "nginx-deployment-7b8c6f4498-psbgs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-psbgs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-psbgs,UID:3749cc15-8b66-46f7-a020-f8f86a7b7bf1,ResourceVersion:18026871,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56837 0xc001b56838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b568b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b568d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-qhskj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qhskj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-qhskj,UID:476ef78f-eb26-4c6a-956f-7b3bd535408c,ResourceVersion:18026882,Generation:0,CreationTimestamp:2019-12-25 14:33:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56957 0xc001b56958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b569c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b569e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-s5ds2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s5ds2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-s5ds2,UID:7c7713c6-d5da-4fac-8c0c-2f5fef811e0c,ResourceVersion:18026885,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56a67 0xc001b56a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-t7fx9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t7fx9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-t7fx9,UID:7f03da70-8fe9-41a4-acde-de3a7c702b29,ResourceVersion:18026741,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56bc7 0xc001b56bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56c30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a1fb7077c64a89c98711f927236023c2f601834e83ffb7cdb9c911494fe13038}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-trt8j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-trt8j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-trt8j,UID:29df7a95-fc5d-4b8e-830c-e241352d179f,ResourceVersion:18026766,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56d37 0xc001b56d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56db0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b80844d31024eecdc9ef28eaa79607327c830b157d4b3e538492b5cac472a4db}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-vl2wt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vl2wt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-vl2wt,UID:3777cb14-a2c8-4fd8-b5e8-6138afdff260,ResourceVersion:18026755,Generation:0,CreationTimestamp:2019-12-25 14:32:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b56ea7 0xc001b56ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b56f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b56f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:32:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-25 14:32:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-25 14:33:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3d4059080e46581650fc1197a5a781122c039091bd04e29b41edc62219f0d9c7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-xgd7s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xgd7s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-xgd7s,UID:8528aff8-2c2a-445b-8572-4dcb55cdd0a6,ResourceVersion:18026864,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b57017 0xc001b57018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b57090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b570b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 25 14:33:32.068: INFO: Pod "nginx-deployment-7b8c6f4498-zwv4q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zwv4q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6267,SelfLink:/api/v1/namespaces/deployment-6267/pods/nginx-deployment-7b8c6f4498-zwv4q,UID:3d7e7e98-ded0-457c-9635-0d31355561cf,ResourceVersion:18026896,Generation:0,CreationTimestamp:2019-12-25 14:33:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2bda526e-9cca-402d-9246-33957ca2e7bc 0xc001b57137 0xc001b57138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwlsf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwlsf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwlsf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b571b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b571d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:33:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-25 14:33:19 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:33:32.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6267" for this suite.
Dec 25 14:34:51.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:34:52.755: INFO: namespace deployment-6267 deletion completed in 1m18.945280595s

• [SLOW TEST:129.899 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
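The [SLOW TEST] above exercises the Deployment controller's proportional scaling: when a Deployment is resized mid-rollout, the replica delta is split across its old and new ReplicaSets in proportion to their current sizes, with rounding leftovers going to the largest set. A minimal sketch of that arithmetic (this is an illustration of the idea, not the controller's actual code; `proportional_scale` and its signature are ours):

```python
def proportional_scale(replica_sets, old_total, new_total):
    """Distribute a scale change across ReplicaSets in proportion
    to each one's share of the old total replica count."""
    delta = new_total - old_total
    # Largest-first so the rounding remainder lands on the biggest set,
    # roughly mirroring the controller's preference.
    ordered = sorted(replica_sets.items(), key=lambda kv: -kv[1])
    scaled = dict(replica_sets)
    distributed = 0
    for i, (name, replicas) in enumerate(ordered):
        if i == len(ordered) - 1:
            share = delta - distributed  # last set absorbs the remainder
        else:
            share = delta * replicas // old_total  # floor of proportional share
        scaled[name] = replicas + share
        distributed += share
    return scaled

# e.g. scaling 10 -> 15 with an 8/2 split keeps roughly the same ratio
print(proportional_scale({"rs-new": 8, "rs-old": 2}, 10, 15))
# -> {'rs-new': 12, 'rs-old': 3}
```

This is why, in the dump above, some `nginx-deployment-7b8c6f4498` pods are already available while others are still `Pending` with `ContainerCreating`: the scale-up added pods to the ReplicaSet without waiting for the earlier ones to become ready.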
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:34:52.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
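The STEP above creates a pod whose container command references variables with Kubernetes' `$(VAR)` syntax, which the kubelet expands before running the command. A rough sketch of the documented expansion rules (`$$` escapes to a literal `$`, unresolvable references pass through unchanged; the `expand` helper is ours, not a Kubernetes API):

```python
import re

def expand(command, env):
    """Expand Kubernetes-style $(VAR) references in a container command.
    '$$' escapes to a literal '$'; unknown $(VAR) is left as-is."""
    def repl(m):
        if m.group(0) == "$$":
            return "$"
        return env.get(m.group(1), m.group(0))  # unresolved refs pass through
    return re.sub(r"\$\$|\$\(([A-Za-z0-9_]+)\)", repl, command)

env = {"MESSAGE": "test message"}
print(expand("echo $(MESSAGE) costs $$5 and $(MISSING)", env))
# -> echo test message costs $5 and $(MISSING)
```

The conformance test then asserts that the expanded command ran by checking the pod reaches "success or failure", as the following wait loop shows.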
Dec 25 14:34:54.428: INFO: Waiting up to 5m0s for pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f" in namespace "var-expansion-9011" to be "success or failure"
Dec 25 14:34:54.435: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.183693ms
Dec 25 14:34:57.716: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.287697737s
Dec 25 14:34:59.937: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.508983589s
Dec 25 14:35:02.099: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.670719219s
Dec 25 14:35:04.808: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.380212514s
Dec 25 14:35:06.822: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.394413949s
Dec 25 14:35:08.831: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.402737s
Dec 25 14:35:10.838: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.410021767s
Dec 25 14:35:12.855: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.427089904s
Dec 25 14:35:14.870: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.441985295s
Dec 25 14:35:16.879: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.450983886s
Dec 25 14:35:18.891: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.463167416s
Dec 25 14:35:20.898: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.470182885s
Dec 25 14:35:22.906: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.478428201s
STEP: Saw pod success
Dec 25 14:35:22.906: INFO: Pod "var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f" satisfied condition "success or failure"
Dec 25 14:35:22.910: INFO: Trying to get logs from node iruya-node pod var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f container dapi-container: 
STEP: delete the pod
Dec 25 14:35:22.978: INFO: Waiting for pod var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f to disappear
Dec 25 14:35:22.992: INFO: Pod var-expansion-c9bed66d-318b-4988-a8f3-0855d019434f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:35:22.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9011" for this suite.
Dec 25 14:35:29.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:35:29.175: INFO: namespace var-expansion-9011 deletion completed in 6.151545902s

• [SLOW TEST:36.419 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:35:29.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 25 14:35:39.839: INFO: Successfully updated pod "annotationupdatef42f0070-6a45-4f74-876e-cc137e5b8512"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:35:41.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3417" for this suite.
Dec 25 14:36:03.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:36:04.075: INFO: namespace downward-api-3417 deletion completed in 22.140782491s

• [SLOW TEST:34.900 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:36:04.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 25 14:36:13.261: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:36:14.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1019" for this suite.
Dec 25 14:36:52.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:36:52.446: INFO: namespace replicaset-1019 deletion completed in 38.122028594s

• [SLOW TEST:48.370 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:36:52.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 25 14:37:03.106: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3218 pod-service-account-5a0004df-d1fd-4231-95e7-414bb12d711d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 25 14:37:06.226: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3218 pod-service-account-5a0004df-d1fd-4231-95e7-414bb12d711d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 25 14:37:06.807: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3218 pod-service-account-5a0004df-d1fd-4231-95e7-414bb12d711d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:37:07.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3218" for this suite.
Dec 25 14:37:13.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:37:13.512: INFO: namespace svcaccounts-3218 deletion completed in 6.138761473s

• [SLOW TEST:21.065 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:37:13.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 14:37:13.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91" in namespace "projected-2186" to be "success or failure"
Dec 25 14:37:13.636: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91": Phase="Pending", Reason="", readiness=false. Elapsed: 12.876025ms
Dec 25 14:37:15.652: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029019257s
Dec 25 14:37:17.670: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046128806s
Dec 25 14:37:19.678: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054183195s
Dec 25 14:37:21.688: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064700838s
Dec 25 14:37:23.705: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081404655s
STEP: Saw pod success
Dec 25 14:37:23.705: INFO: Pod "downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91" satisfied condition "success or failure"
Dec 25 14:37:23.712: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91 container client-container: 
STEP: delete the pod
Dec 25 14:37:23.989: INFO: Waiting for pod downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91 to disappear
Dec 25 14:37:24.003: INFO: Pod downwardapi-volume-ff610b6e-72f0-47e4-aecb-5efffffd3f91 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:37:24.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2186" for this suite.
Dec 25 14:37:30.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:37:30.198: INFO: namespace projected-2186 deletion completed in 6.161720628s

• [SLOW TEST:16.686 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:37:30.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 25 14:37:30.301: INFO: Waiting up to 5m0s for pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515" in namespace "downward-api-4561" to be "success or failure"
Dec 25 14:37:30.311: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515": Phase="Pending", Reason="", readiness=false. Elapsed: 9.543136ms
Dec 25 14:37:32.317: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016035845s
Dec 25 14:37:34.325: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024281235s
Dec 25 14:37:36.334: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032993013s
Dec 25 14:37:38.355: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053705129s
Dec 25 14:37:40.376: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074579813s
STEP: Saw pod success
Dec 25 14:37:40.376: INFO: Pod "downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515" satisfied condition "success or failure"
Dec 25 14:37:40.381: INFO: Trying to get logs from node iruya-node pod downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515 container dapi-container: 
STEP: delete the pod
Dec 25 14:37:40.435: INFO: Waiting for pod downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515 to disappear
Dec 25 14:37:40.439: INFO: Pod downward-api-ad0abfa9-e5e2-4b95-ab18-905ac0788515 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:37:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4561" for this suite.
Dec 25 14:37:46.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:37:46.716: INFO: namespace downward-api-4561 deletion completed in 6.244910267s

• [SLOW TEST:16.517 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:37:46.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-410bf434-4995-4b72-a09d-75d779088bbb
STEP: Creating a pod to test consume configMaps
Dec 25 14:37:46.861: INFO: Waiting up to 5m0s for pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509" in namespace "configmap-1028" to be "success or failure"
Dec 25 14:37:46.880: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509": Phase="Pending", Reason="", readiness=false. Elapsed: 18.181379ms
Dec 25 14:37:48.896: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034501097s
Dec 25 14:37:50.905: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043275892s
Dec 25 14:37:52.913: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051764305s
Dec 25 14:37:54.923: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061805854s
Dec 25 14:37:56.933: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071058524s
STEP: Saw pod success
Dec 25 14:37:56.933: INFO: Pod "pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509" satisfied condition "success or failure"
Dec 25 14:37:56.938: INFO: Trying to get logs from node iruya-node pod pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509 container configmap-volume-test: 
STEP: delete the pod
Dec 25 14:37:57.056: INFO: Waiting for pod pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509 to disappear
Dec 25 14:37:57.060: INFO: Pod pod-configmaps-83c0d318-438d-4229-a644-c964ebe7b509 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:37:57.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1028" for this suite.
Dec 25 14:38:03.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:38:03.188: INFO: namespace configmap-1028 deletion completed in 6.123949551s

• [SLOW TEST:16.472 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:38:03.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3721f5ae-a1b4-4485-9b13-e157222d5e6d
STEP: Creating a pod to test consume configMaps
Dec 25 14:38:03.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4" in namespace "projected-8046" to be "success or failure"
Dec 25 14:38:03.380: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.561686ms
Dec 25 14:38:05.390: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032952089s
Dec 25 14:38:07.406: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048275284s
Dec 25 14:38:09.415: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057505633s
Dec 25 14:38:11.434: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076272105s
Dec 25 14:38:13.439: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082191493s
STEP: Saw pod success
Dec 25 14:38:13.440: INFO: Pod "pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4" satisfied condition "success or failure"
Dec 25 14:38:13.444: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 25 14:38:13.577: INFO: Waiting for pod pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4 to disappear
Dec 25 14:38:13.586: INFO: Pod pod-projected-configmaps-289b84a7-4bfe-4f81-b6d2-947b0d8a49d4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:38:13.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8046" for this suite.
Dec 25 14:38:19.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:38:19.744: INFO: namespace projected-8046 deletion completed in 6.152871344s

• [SLOW TEST:16.556 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:38:19.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 14:38:19.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571" in namespace "projected-3962" to be "success or failure"
Dec 25 14:38:19.886: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571": Phase="Pending", Reason="", readiness=false. Elapsed: 15.544819ms
Dec 25 14:38:21.907: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036654078s
Dec 25 14:38:23.931: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059783838s
Dec 25 14:38:25.946: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074679911s
Dec 25 14:38:27.951: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080420964s
Dec 25 14:38:29.966: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095056985s
STEP: Saw pod success
Dec 25 14:38:29.966: INFO: Pod "downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571" satisfied condition "success or failure"
Dec 25 14:38:29.971: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571 container client-container: 
STEP: delete the pod
Dec 25 14:38:30.159: INFO: Waiting for pod downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571 to disappear
Dec 25 14:38:30.169: INFO: Pod downwardapi-volume-8f5aed2d-52d1-46dc-a04c-6de7aaece571 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:38:30.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3962" for this suite.
Dec 25 14:38:36.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:38:36.331: INFO: namespace projected-3962 deletion completed in 6.155919475s

• [SLOW TEST:16.587 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:38:36.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 25 14:38:36.502: INFO: Waiting up to 5m0s for pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f" in namespace "downward-api-84" to be "success or failure"
Dec 25 14:38:36.533: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.486538ms
Dec 25 14:38:38.549: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046432414s
Dec 25 14:38:40.562: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059729658s
Dec 25 14:38:42.603: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100977218s
Dec 25 14:38:45.265: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.762612956s
Dec 25 14:38:47.281: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.778539076s
STEP: Saw pod success
Dec 25 14:38:47.281: INFO: Pod "downward-api-62949c5c-47b3-4f07-a658-e71290506b3f" satisfied condition "success or failure"
Dec 25 14:38:47.287: INFO: Trying to get logs from node iruya-node pod downward-api-62949c5c-47b3-4f07-a658-e71290506b3f container dapi-container: 
STEP: delete the pod
Dec 25 14:38:47.503: INFO: Waiting for pod downward-api-62949c5c-47b3-4f07-a658-e71290506b3f to disappear
Dec 25 14:38:47.575: INFO: Pod downward-api-62949c5c-47b3-4f07-a658-e71290506b3f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:38:47.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-84" for this suite.
Dec 25 14:38:53.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:38:53.818: INFO: namespace downward-api-84 deletion completed in 6.236101083s

• [SLOW TEST:17.487 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:38:53.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 25 14:38:53.964: INFO: Waiting up to 5m0s for pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f" in namespace "var-expansion-6829" to be "success or failure"
Dec 25 14:38:53.980: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.978882ms
Dec 25 14:38:55.995: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030038073s
Dec 25 14:38:58.005: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040379131s
Dec 25 14:39:00.017: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051984142s
Dec 25 14:39:02.041: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075989663s
Dec 25 14:39:04.058: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093272091s
STEP: Saw pod success
Dec 25 14:39:04.058: INFO: Pod "var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f" satisfied condition "success or failure"
Dec 25 14:39:04.063: INFO: Trying to get logs from node iruya-node pod var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f container dapi-container: 
STEP: delete the pod
Dec 25 14:39:04.178: INFO: Waiting for pod var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f to disappear
Dec 25 14:39:04.240: INFO: Pod var-expansion-4d36a07f-c343-407c-86f1-bbd44ac8b93f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:39:04.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6829" for this suite.
Dec 25 14:39:10.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:39:10.396: INFO: namespace var-expansion-6829 deletion completed in 6.134397451s

• [SLOW TEST:16.576 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
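The Variable Expansion spec checks that `$(VAR)` references in a container's `args` are expanded by the kubelet from the container's environment. A minimal sketch of that mechanism (names and values are illustrative, not the test's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # assumed image
    command: ["sh", "-c"]
    # $(MY_VAR) is substituted with the env var's value before the
    # container starts; this is Kubernetes expansion, not shell expansion.
    args: ["echo $(MY_VAR)"]
    env:
    - name: MY_VAR
      value: "expanded-value"
```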
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:39:10.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-339f118d-5af6-4a86-896e-b2c7d5b778cf
STEP: Creating a pod to test consume configMaps
Dec 25 14:39:10.517: INFO: Waiting up to 5m0s for pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192" in namespace "configmap-8455" to be "success or failure"
Dec 25 14:39:10.561: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192": Phase="Pending", Reason="", readiness=false. Elapsed: 44.164495ms
Dec 25 14:39:12.577: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059855259s
Dec 25 14:39:14.583: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065966502s
Dec 25 14:39:16.593: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075865994s
Dec 25 14:39:18.611: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09461361s
Dec 25 14:39:20.623: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105820737s
STEP: Saw pod success
Dec 25 14:39:20.623: INFO: Pod "pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192" satisfied condition "success or failure"
Dec 25 14:39:20.628: INFO: Trying to get logs from node iruya-node pod pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192 container configmap-volume-test: 
STEP: delete the pod
Dec 25 14:39:20.679: INFO: Waiting for pod pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192 to disappear
Dec 25 14:39:20.686: INFO: Pod pod-configmaps-48d3aee4-f76e-4782-b7b3-78aaab416192 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:39:20.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8455" for this suite.
Dec 25 14:39:26.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:39:26.947: INFO: namespace configmap-8455 deletion completed in 6.256369417s

• [SLOW TEST:16.550 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
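The ConfigMap volume spec mounts a ConfigMap as files and reads it back from a container running as a non-root UID (hence the `[LinuxOnly]` tag). A sketch under assumed names, keys, and UID:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume      # illustrative name
data:
  data-1: value-1                  # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root UID; the point of the test
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumed image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```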
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:39:26.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 14:39:27.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119" in namespace "projected-7255" to be "success or failure"
Dec 25 14:39:27.028: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119": Phase="Pending", Reason="", readiness=false. Elapsed: 5.042363ms
Dec 25 14:39:29.050: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02711786s
Dec 25 14:39:31.062: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039174085s
Dec 25 14:39:33.071: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047851875s
Dec 25 14:39:35.078: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054841232s
Dec 25 14:39:37.085: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062530115s
STEP: Saw pod success
Dec 25 14:39:37.085: INFO: Pod "downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119" satisfied condition "success or failure"
Dec 25 14:39:37.091: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119 container client-container: 
STEP: delete the pod
Dec 25 14:39:37.218: INFO: Waiting for pod downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119 to disappear
Dec 25 14:39:37.226: INFO: Pod downwardapi-volume-0149162e-f482-456c-a4c2-f1e84db74119 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:39:37.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7255" for this suite.
Dec 25 14:39:43.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:39:43.413: INFO: namespace projected-7255 deletion completed in 6.178481697s

• [SLOW TEST:16.465 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
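The Projected downwardAPI spec does the same metadata injection as the earlier Downward API test, but through a `projected` volume rather than environment variables, exposing only the pod name as a file. An illustrative sketch (file path and names assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          # Only the pod name is projected, matching "podname only".
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```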
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:39:43.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 25 14:39:43.648: INFO: Number of nodes with available pods: 0
Dec 25 14:39:43.648: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:45.528: INFO: Number of nodes with available pods: 0
Dec 25 14:39:45.529: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:45.667: INFO: Number of nodes with available pods: 0
Dec 25 14:39:45.667: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:46.914: INFO: Number of nodes with available pods: 0
Dec 25 14:39:46.914: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:47.666: INFO: Number of nodes with available pods: 0
Dec 25 14:39:47.667: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:48.669: INFO: Number of nodes with available pods: 0
Dec 25 14:39:48.670: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:50.113: INFO: Number of nodes with available pods: 0
Dec 25 14:39:50.113: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:51.235: INFO: Number of nodes with available pods: 0
Dec 25 14:39:51.235: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:51.703: INFO: Number of nodes with available pods: 0
Dec 25 14:39:51.703: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:52.740: INFO: Number of nodes with available pods: 0
Dec 25 14:39:52.740: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:53.672: INFO: Number of nodes with available pods: 0
Dec 25 14:39:53.672: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:54.667: INFO: Number of nodes with available pods: 2
Dec 25 14:39:54.667: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 25 14:39:54.761: INFO: Number of nodes with available pods: 1
Dec 25 14:39:54.761: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:55.778: INFO: Number of nodes with available pods: 1
Dec 25 14:39:55.779: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:56.777: INFO: Number of nodes with available pods: 1
Dec 25 14:39:56.777: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:57.784: INFO: Number of nodes with available pods: 1
Dec 25 14:39:57.784: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:58.798: INFO: Number of nodes with available pods: 1
Dec 25 14:39:58.798: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:39:59.778: INFO: Number of nodes with available pods: 1
Dec 25 14:39:59.778: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:00.778: INFO: Number of nodes with available pods: 1
Dec 25 14:40:00.778: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:01.788: INFO: Number of nodes with available pods: 1
Dec 25 14:40:01.788: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:02.778: INFO: Number of nodes with available pods: 1
Dec 25 14:40:02.778: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:03.786: INFO: Number of nodes with available pods: 1
Dec 25 14:40:03.786: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:04.777: INFO: Number of nodes with available pods: 1
Dec 25 14:40:04.777: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:05.784: INFO: Number of nodes with available pods: 1
Dec 25 14:40:05.785: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:06.822: INFO: Number of nodes with available pods: 1
Dec 25 14:40:06.822: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:07.788: INFO: Number of nodes with available pods: 1
Dec 25 14:40:07.788: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:08.854: INFO: Number of nodes with available pods: 1
Dec 25 14:40:08.854: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:09.795: INFO: Number of nodes with available pods: 1
Dec 25 14:40:09.796: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:10.782: INFO: Number of nodes with available pods: 1
Dec 25 14:40:10.782: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:11.799: INFO: Number of nodes with available pods: 1
Dec 25 14:40:11.800: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:12.784: INFO: Number of nodes with available pods: 1
Dec 25 14:40:12.784: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:13.800: INFO: Number of nodes with available pods: 1
Dec 25 14:40:13.800: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:14.787: INFO: Number of nodes with available pods: 1
Dec 25 14:40:14.787: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:40:15.782: INFO: Number of nodes with available pods: 2
Dec 25 14:40:15.782: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9286, will wait for the garbage collector to delete the pods
Dec 25 14:40:15.874: INFO: Deleting DaemonSet.extensions daemon-set took: 18.071552ms
Dec 25 14:40:16.175: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.806398ms
Dec 25 14:40:27.883: INFO: Number of nodes with available pods: 0
Dec 25 14:40:27.883: INFO: Number of running nodes: 0, number of available pods: 0
Dec 25 14:40:27.888: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9286/daemonsets","resourceVersion":"18028108"},"items":null}

Dec 25 14:40:27.892: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9286/pods","resourceVersion":"18028108"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:40:27.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9286" for this suite.
Dec 25 14:40:33.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:40:34.077: INFO: namespace daemonsets-9286 deletion completed in 6.159430184s

• [SLOW TEST:50.664 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
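The DaemonSet spec creates a simple DaemonSet, waits until one pod is available on every schedulable node (the repeated "Number of nodes with available pods" lines are that polling loop), then deletes one daemon pod and verifies the controller revives it. A DaemonSet of roughly this shape is involved; the labels and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set              # assumed label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx               # assumed image
```

Because a DaemonSet's controller reconciles toward one pod per matching node, deleting a daemon pod transiently drops the available count to 1 (visible in the log) until the replacement pod becomes ready.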
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:40:34.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 25 14:40:34.156: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 25 14:40:34.249: INFO: Waiting for terminating namespaces to be deleted...
Dec 25 14:40:34.251: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Dec 25 14:40:34.265: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 25 14:40:34.265: INFO: 	Container weave ready: true, restart count 0
Dec 25 14:40:34.265: INFO: 	Container weave-npc ready: true, restart count 0
Dec 25 14:40:34.265: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.265: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 25 14:40:34.265: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Dec 25 14:40:34.277: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 25 14:40:34.277: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 25 14:40:34.277: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 25 14:40:34.277: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 25 14:40:34.277: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container coredns ready: true, restart count 0
Dec 25 14:40:34.277: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container etcd ready: true, restart count 0
Dec 25 14:40:34.277: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container weave ready: true, restart count 0
Dec 25 14:40:34.277: INFO: 	Container weave-npc ready: true, restart count 0
Dec 25 14:40:34.277: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 25 14:40:34.277: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e3a43332082a3c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:40:35.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3860" for this suite.
Dec 25 14:40:41.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:40:41.472: INFO: namespace sched-pred-3860 deletion completed in 6.15778489s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.394 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
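The scheduler-predicates spec submits a pod whose `nodeSelector` matches no node and asserts that it stays unschedulable, producing exactly the `FailedScheduling` event recorded above ("0/2 nodes are available: 2 node(s) didn't match node selector."). A sketch with an assumed label key/value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    # No node carries this label, so the scheduler can never place the pod.
    nonexistent-label: nonexistent-value   # assumed key/value
  containers:
  - name: pause
    image: k8s.gcr.io/pause                # assumed image
```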
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:40:41.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7922.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7922.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 25 14:40:53.673: INFO: File wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-b9b48453-005d-4fe8-9f20-528dec9ac72a contains '' instead of 'foo.example.com.'
Dec 25 14:40:53.677: INFO: File jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-b9b48453-005d-4fe8-9f20-528dec9ac72a contains '' instead of 'foo.example.com.'
Dec 25 14:40:53.677: INFO: Lookups using dns-7922/dns-test-b9b48453-005d-4fe8-9f20-528dec9ac72a failed for: [wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local]

Dec 25 14:40:58.701: INFO: DNS probes using dns-test-b9b48453-005d-4fe8-9f20-528dec9ac72a succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7922.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7922.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 25 14:41:14.912: INFO: File wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 contains '' instead of 'bar.example.com.'
Dec 25 14:41:14.918: INFO: File jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 contains '' instead of 'bar.example.com.'
Dec 25 14:41:14.918: INFO: Lookups using dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 failed for: [wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local]

Dec 25 14:41:19.932: INFO: File wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 25 14:41:19.940: INFO: File jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 25 14:41:19.940: INFO: Lookups using dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 failed for: [wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local]

Dec 25 14:41:24.938: INFO: File jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 25 14:41:24.938: INFO: Lookups using dns-7922/dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 failed for: [jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local]

Dec 25 14:41:29.933: INFO: DNS probes using dns-test-82d13522-2c3b-4a78-9fe5-19e9a5319884 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7922.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7922.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 25 14:41:46.369: INFO: File wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-e92c1b7a-0b9a-49ad-9587-e28a67faa4ab contains '' instead of '10.107.98.227'
Dec 25 14:41:46.376: INFO: File jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local from pod  dns-7922/dns-test-e92c1b7a-0b9a-49ad-9587-e28a67faa4ab contains '' instead of '10.107.98.227'
Dec 25 14:41:46.376: INFO: Lookups using dns-7922/dns-test-e92c1b7a-0b9a-49ad-9587-e28a67faa4ab failed for: [wheezy_udp@dns-test-service-3.dns-7922.svc.cluster.local jessie_udp@dns-test-service-3.dns-7922.svc.cluster.local]

Dec 25 14:41:51.402: INFO: DNS probes using dns-test-e92c1b7a-0b9a-49ad-9587-e28a67faa4ab succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:41:51.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7922" for this suite.
Dec 25 14:41:59.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:41:59.954: INFO: namespace dns-7922 deletion completed in 8.187210823s

• [SLOW TEST:78.482 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
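The DNS spec exercises ExternalName services in three phases, all visible in the log: resolve the service name to a CNAME for `foo.example.com.`, patch `externalName` to `bar.example.com` and re-probe, then convert the service to `type: ClusterIP` and expect an A record (`10.107.98.227` above). The probe pods run the `dig +short ... CNAME` loops quoted in the STEP lines. The initial service looks roughly like this (the service name matches the log; the rest is standard):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
```

The transient "contains 'foo.example.com.' instead of 'bar.example.com.'" failures after the patch reflect normal DNS propagation delay before the probes converge.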
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:41:59.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-b535735d-9efc-44b2-a446-530a5597573b
STEP: Creating secret with name s-test-opt-upd-2154b93a-0ca4-4255-870c-354b7b3e5d78
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b535735d-9efc-44b2-a446-530a5597573b
STEP: Updating secret s-test-opt-upd-2154b93a-0ca4-4255-870c-354b7b3e5d78
STEP: Creating secret with name s-test-opt-create-1a0ca1b5-89d0-4a8b-a48f-894f0821c7af
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:43:44.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6669" for this suite.
Dec 25 14:44:22.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:44:22.757: INFO: namespace secrets-6669 deletion completed in 38.215884115s

• [SLOW TEST:142.802 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
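The Secrets spec mounts several secret volumes marked `optional: true`, then deletes one secret, updates another, and creates a previously missing one, waiting for each change to appear in the mounted files. A simplified sketch of one such optional mount (paths, names, and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox                 # assumed image
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 2; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create    # may not exist yet
      optional: true                   # pod starts even if the secret is absent
```

With `optional: true` the pod is admitted before the secret exists, and the kubelet populates the volume once the secret is created, which is what the long "waiting to observe update in volume" phase verifies.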
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:44:22.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8161bfff-c635-4a22-9f7b-d06da0c5a805
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8161bfff-c635-4a22-9f7b-d06da0c5a805
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:44:35.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3568" for this suite.
Dec 25 14:44:57.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:44:57.290: INFO: namespace configmap-3568 deletion completed in 22.17546451s

• [SLOW TEST:34.533 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:44:57.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ac8487e4-7aee-4851-b10c-f8787b582fd9 in namespace container-probe-1254
Dec 25 14:45:07.432: INFO: Started pod liveness-ac8487e4-7aee-4851-b10c-f8787b582fd9 in namespace container-probe-1254
STEP: checking the pod's current state and verifying that restartCount is present
Dec 25 14:45:07.437: INFO: Initial restart count of pod liveness-ac8487e4-7aee-4851-b10c-f8787b582fd9 is 0
Dec 25 14:45:23.536: INFO: Restart count of pod container-probe-1254/liveness-ac8487e4-7aee-4851-b10c-f8787b582fd9 is now 1 (16.099300303s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:45:23.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1254" for this suite.
Dec 25 14:45:29.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:45:30.000: INFO: namespace container-probe-1254 deletion completed in 6.396721966s

• [SLOW TEST:32.710 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
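The restart at 14:45:23 is the kubelet reacting to a failing HTTP liveness probe. A pod spec along these lines produces that behavior (image, port, and thresholds here are assumptions for illustration — the log does not show the exact manifest the framework generates):

```shell
# Print an illustrative pod spec whose container is probed on /healthz;
# repeated probe failures make the kubelet restart the container, which
# is what increments restartCount in the test.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # test image that starts failing /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
)
printf '%s\n' "$manifest"
```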
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:45:30.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 25 14:45:30.136: INFO: Waiting up to 5m0s for pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd" in namespace "emptydir-5941" to be "success or failure"
Dec 25 14:45:30.139: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554502ms
Dec 25 14:45:32.154: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017995331s
Dec 25 14:45:34.172: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035865865s
Dec 25 14:45:36.187: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050287259s
Dec 25 14:45:38.198: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061559599s
Dec 25 14:45:40.209: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072298555s
STEP: Saw pod success
Dec 25 14:45:40.209: INFO: Pod "pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd" satisfied condition "success or failure"
Dec 25 14:45:40.214: INFO: Trying to get logs from node iruya-node pod pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd container test-container: 
STEP: delete the pod
Dec 25 14:45:40.305: INFO: Waiting for pod pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd to disappear
Dec 25 14:45:40.331: INFO: Pod pod-8cc070e0-744d-45bb-af2c-e72f59b5ebdd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:45:40.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5941" for this suite.
Dec 25 14:45:46.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:45:46.538: INFO: namespace emptydir-5941 deletion completed in 6.198756636s

• [SLOW TEST:16.537 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
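The (root,0777,tmpfs) test writes into an emptyDir mounted with mode 0777 and checks the observed permissions inside the pod. The permission check itself can be sketched locally (a plain temp directory stands in for the tmpfs-backed emptyDir, since mounting tmpfs needs root; `stat -c` assumes GNU coreutils):

```shell
# Local stand-in for the mount-mode assertion: create a directory,
# set mode 0777, and read the mode back the way the test container does.
dir=$(mktemp -d)
chmod 0777 "$dir"
mode=$(stat -c '%a' "$dir")
echo "mount mode: $mode"   # the e2e test expects 777 inside the pod
touch "$dir/probe"         # world-writable, so creating a file succeeds
rm -rf "$dir"
```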
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:45:46.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 14:45:46.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f" in namespace "downward-api-6951" to be "success or failure"
Dec 25 14:45:46.766: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.620006ms
Dec 25 14:45:48.776: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039733605s
Dec 25 14:45:50.784: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047328177s
Dec 25 14:45:52.795: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059055798s
Dec 25 14:45:54.821: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085032359s
Dec 25 14:45:56.833: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096642805s
STEP: Saw pod success
Dec 25 14:45:56.833: INFO: Pod "downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f" satisfied condition "success or failure"
Dec 25 14:45:56.839: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f container client-container: 
STEP: delete the pod
Dec 25 14:45:56.897: INFO: Waiting for pod downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f to disappear
Dec 25 14:45:56.906: INFO: Pod downwardapi-volume-534a4208-4873-4ee3-8867-288191a1ff2f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:45:56.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6951" for this suite.
Dec 25 14:46:03.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:46:03.238: INFO: namespace downward-api-6951 deletion completed in 6.325473856s

• [SLOW TEST:16.699 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
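The test above relies on a downward API volume exposing `limits.cpu` for a container that sets no CPU limit; in that case the projected value falls back to the node's allocatable CPU. An illustrative volume stanza (names and divisor are assumptions):

```shell
# Print an illustrative downward API volume: resourceFieldRef projects the
# container's CPU limit into a file; with no limit set on the container,
# the kubelet substitutes node allocatable CPU, which is what the test reads.
manifest=$(cat <<'EOF'
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
)
printf '%s\n' "$manifest"
```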
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:46:03.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-64d522b4-62f6-4c82-a73e-862896b66a8d in namespace container-probe-1821
Dec 25 14:46:13.454: INFO: Started pod busybox-64d522b4-62f6-4c82-a73e-862896b66a8d in namespace container-probe-1821
STEP: checking the pod's current state and verifying that restartCount is present
Dec 25 14:46:13.459: INFO: Initial restart count of pod busybox-64d522b4-62f6-4c82-a73e-862896b66a8d is 0
Dec 25 14:47:03.779: INFO: Restart count of pod container-probe-1821/busybox-64d522b4-62f6-4c82-a73e-862896b66a8d is now 1 (50.32011494s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:47:03.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1821" for this suite.
Dec 25 14:47:09.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:47:09.985: INFO: namespace container-probe-1821 deletion completed in 6.151115897s

• [SLOW TEST:66.747 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:47:09.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:47:10.134: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 25 14:47:10.144: INFO: Number of nodes with available pods: 0
Dec 25 14:47:10.144: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 25 14:47:10.218: INFO: Number of nodes with available pods: 0
Dec 25 14:47:10.218: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:11.237: INFO: Number of nodes with available pods: 0
Dec 25 14:47:11.237: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:12.227: INFO: Number of nodes with available pods: 0
Dec 25 14:47:12.227: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:13.262: INFO: Number of nodes with available pods: 0
Dec 25 14:47:13.263: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:14.235: INFO: Number of nodes with available pods: 0
Dec 25 14:47:14.235: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:15.259: INFO: Number of nodes with available pods: 0
Dec 25 14:47:15.259: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:16.235: INFO: Number of nodes with available pods: 0
Dec 25 14:47:16.236: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:17.244: INFO: Number of nodes with available pods: 0
Dec 25 14:47:17.244: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:18.228: INFO: Number of nodes with available pods: 0
Dec 25 14:47:18.228: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:19.231: INFO: Number of nodes with available pods: 1
Dec 25 14:47:19.231: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 25 14:47:19.333: INFO: Number of nodes with available pods: 1
Dec 25 14:47:19.334: INFO: Number of running nodes: 0, number of available pods: 1
Dec 25 14:47:20.345: INFO: Number of nodes with available pods: 0
Dec 25 14:47:20.345: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 25 14:47:20.364: INFO: Number of nodes with available pods: 0
Dec 25 14:47:20.364: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:21.381: INFO: Number of nodes with available pods: 0
Dec 25 14:47:21.382: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:22.379: INFO: Number of nodes with available pods: 0
Dec 25 14:47:22.379: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:23.372: INFO: Number of nodes with available pods: 0
Dec 25 14:47:23.372: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:24.374: INFO: Number of nodes with available pods: 0
Dec 25 14:47:24.375: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:25.373: INFO: Number of nodes with available pods: 0
Dec 25 14:47:25.373: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:26.373: INFO: Number of nodes with available pods: 0
Dec 25 14:47:26.373: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:27.372: INFO: Number of nodes with available pods: 0
Dec 25 14:47:27.372: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:28.371: INFO: Number of nodes with available pods: 0
Dec 25 14:47:28.371: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:29.374: INFO: Number of nodes with available pods: 0
Dec 25 14:47:29.374: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:30.375: INFO: Number of nodes with available pods: 0
Dec 25 14:47:30.375: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:31.374: INFO: Number of nodes with available pods: 0
Dec 25 14:47:31.374: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:32.378: INFO: Number of nodes with available pods: 0
Dec 25 14:47:32.378: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:33.377: INFO: Number of nodes with available pods: 0
Dec 25 14:47:33.377: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:34.374: INFO: Number of nodes with available pods: 0
Dec 25 14:47:34.374: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:35.375: INFO: Number of nodes with available pods: 0
Dec 25 14:47:35.376: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:36.374: INFO: Number of nodes with available pods: 0
Dec 25 14:47:36.374: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:37.373: INFO: Number of nodes with available pods: 0
Dec 25 14:47:37.373: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:38.380: INFO: Number of nodes with available pods: 0
Dec 25 14:47:38.381: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:39.372: INFO: Number of nodes with available pods: 0
Dec 25 14:47:39.372: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:40.377: INFO: Number of nodes with available pods: 0
Dec 25 14:47:40.377: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:41.382: INFO: Number of nodes with available pods: 0
Dec 25 14:47:41.382: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:42.383: INFO: Number of nodes with available pods: 0
Dec 25 14:47:42.383: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:43.420: INFO: Number of nodes with available pods: 0
Dec 25 14:47:43.420: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:44.375: INFO: Number of nodes with available pods: 0
Dec 25 14:47:44.375: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:45.373: INFO: Number of nodes with available pods: 0
Dec 25 14:47:45.373: INFO: Node iruya-node is running more than one daemon pod
Dec 25 14:47:46.374: INFO: Number of nodes with available pods: 1
Dec 25 14:47:46.374: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9532, will wait for the garbage collector to delete the pods
Dec 25 14:47:46.448: INFO: Deleting DaemonSet.extensions daemon-set took: 13.611898ms
Dec 25 14:47:46.748: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.403087ms
Dec 25 14:47:53.607: INFO: Number of nodes with available pods: 0
Dec 25 14:47:53.607: INFO: Number of running nodes: 0, number of available pods: 0
Dec 25 14:47:53.616: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9532/daemonsets","resourceVersion":"18029085"},"items":null}

Dec 25 14:47:53.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9532/pods","resourceVersion":"18029085"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:47:53.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9532" for this suite.
Dec 25 14:47:59.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:47:59.983: INFO: namespace daemonsets-9532 deletion completed in 6.294502106s

• [SLOW TEST:49.997 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
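The blue/green dance above is driven by the DaemonSet's nodeSelector: the pod is scheduled only while the node's label matches, disappears when the node is relabeled, and returns once the selector is updated (along with the switch to RollingUpdate). A sketch of the relevant spec fields (label key/values are illustrative assumptions):

```shell
# Print the DaemonSet fragments the test manipulates: a nodeSelector that
# gates scheduling on a node label, and the RollingUpdate strategy it
# switches to mid-test. Relabeling a node (e.g. color=blue -> color=green)
# unschedules the daemon pod until the selector is updated to match.
manifest=$(cat <<'EOF'
spec:
  template:
    spec:
      nodeSelector:
        color: green
  updateStrategy:
    type: RollingUpdate
EOF
)
printf '%s\n' "$manifest"
```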
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:47:59.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-b3988b9d-ba8c-4b6b-beee-564d8c11fd5e
STEP: Creating a pod to test consume configMaps
Dec 25 14:48:00.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9" in namespace "configmap-9045" to be "success or failure"
Dec 25 14:48:00.096: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32324ms
Dec 25 14:48:02.102: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010242416s
Dec 25 14:48:04.111: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019241344s
Dec 25 14:48:06.122: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030444422s
Dec 25 14:48:08.139: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047745488s
Dec 25 14:48:10.155: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063947155s
STEP: Saw pod success
Dec 25 14:48:10.156: INFO: Pod "pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9" satisfied condition "success or failure"
Dec 25 14:48:10.160: INFO: Trying to get logs from node iruya-node pod pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9 container configmap-volume-test: 
STEP: delete the pod
Dec 25 14:48:10.222: INFO: Waiting for pod pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9 to disappear
Dec 25 14:48:10.229: INFO: Pod pod-configmaps-37f656ad-1f83-4081-ba92-9440a9e5b6a9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:48:10.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9045" for this suite.
Dec 25 14:48:16.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:48:16.409: INFO: namespace configmap-9045 deletion completed in 6.169738548s

• [SLOW TEST:16.425 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:48:16.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 25 14:48:16.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1570'
Dec 25 14:48:18.887: INFO: stderr: ""
Dec 25 14:48:18.887: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 25 14:48:19.895: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:19.895: INFO: Found 0 / 1
Dec 25 14:48:20.898: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:20.898: INFO: Found 0 / 1
Dec 25 14:48:21.899: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:21.900: INFO: Found 0 / 1
Dec 25 14:48:22.896: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:22.896: INFO: Found 0 / 1
Dec 25 14:48:23.910: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:23.910: INFO: Found 0 / 1
Dec 25 14:48:24.912: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:24.913: INFO: Found 0 / 1
Dec 25 14:48:25.912: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:25.912: INFO: Found 0 / 1
Dec 25 14:48:26.905: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:26.905: INFO: Found 0 / 1
Dec 25 14:48:27.902: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:27.902: INFO: Found 1 / 1
Dec 25 14:48:27.902: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 25 14:48:27.909: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 14:48:27.909: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 25 14:48:27.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf98v redis-master --namespace=kubectl-1570'
Dec 25 14:48:28.107: INFO: stderr: ""
Dec 25 14:48:28.108: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Dec 14:48:26.051 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Dec 14:48:26.052 # Server started, Redis version 3.2.12\n1:M 25 Dec 14:48:26.053 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Dec 14:48:26.053 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 25 14:48:28.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf98v redis-master --namespace=kubectl-1570 --tail=1'
Dec 25 14:48:28.231: INFO: stderr: ""
Dec 25 14:48:28.232: INFO: stdout: "1:M 25 Dec 14:48:26.053 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 25 14:48:28.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf98v redis-master --namespace=kubectl-1570 --limit-bytes=1'
Dec 25 14:48:28.371: INFO: stderr: ""
Dec 25 14:48:28.371: INFO: stdout: " "
STEP: exposing timestamps
Dec 25 14:48:28.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf98v redis-master --namespace=kubectl-1570 --tail=1 --timestamps'
Dec 25 14:48:28.486: INFO: stderr: ""
Dec 25 14:48:28.486: INFO: stdout: "2019-12-25T14:48:26.053705602Z 1:M 25 Dec 14:48:26.053 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 25 14:48:30.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf98v redis-master --namespace=kubectl-1570 --since=1s'
Dec 25 14:48:31.182: INFO: stderr: ""
Dec 25 14:48:31.182: INFO: stdout: ""
Dec 25 14:48:31.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qf98v redis-master --namespace=kubectl-1570 --since=24h'
Dec 25 14:48:31.339: INFO: stderr: ""
Dec 25 14:48:31.339: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Dec 14:48:26.051 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Dec 14:48:26.052 # Server started, Redis version 3.2.12\n1:M 25 Dec 14:48:26.053 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Dec 14:48:26.053 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 25 14:48:31.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1570'
Dec 25 14:48:31.505: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 25 14:48:31.505: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 25 14:48:31.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1570'
Dec 25 14:48:31.604: INFO: stderr: "No resources found.\n"
Dec 25 14:48:31.604: INFO: stdout: ""
Dec 25 14:48:31.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1570 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 25 14:48:31.756: INFO: stderr: ""
Dec 25 14:48:31.756: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:48:31.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1570" for this suite.
Dec 25 14:48:53.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:48:53.922: INFO: namespace kubectl-1570 deletion completed in 22.160467912s

• [SLOW TEST:37.513 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:48:53.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:49:02.160: INFO: Waiting up to 5m0s for pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6" in namespace "pods-3905" to be "success or failure"
Dec 25 14:49:02.177: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.677851ms
Dec 25 14:49:04.183: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023170492s
Dec 25 14:49:06.192: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031939428s
Dec 25 14:49:08.200: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039511961s
Dec 25 14:49:10.208: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04828777s
Dec 25 14:49:12.214: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054200484s
STEP: Saw pod success
Dec 25 14:49:12.215: INFO: Pod "client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6" satisfied condition "success or failure"
Dec 25 14:49:12.218: INFO: Trying to get logs from node iruya-node pod client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6 container env3cont: 
STEP: delete the pod
Dec 25 14:49:12.377: INFO: Waiting for pod client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6 to disappear
Dec 25 14:49:12.414: INFO: Pod client-envvars-10dd58a1-cf91-4fd9-849b-d5f98459a4e6 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:49:12.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3905" for this suite.
Dec 25 14:49:54.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:49:54.651: INFO: namespace pods-3905 deletion completed in 42.232404074s

• [SLOW TEST:60.728 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
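The spec above checks that a pod created after a Service exists sees that Service's injected environment variables (`{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT`, with the name upper-cased and dashes replaced by underscores). A minimal sketch of the kind of pod involved — the pod name and command are illustrative, not the test's actual manifest:

```yaml
# Hypothetical sketch: a pod created after a Service named "fooservice"
# exists will have FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT
# injected into its containers by the kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example   # placeholder; the test uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont               # container name matches the log above
    image: busybox
    command: ["sh", "-c", "env | grep SERVICE"]
```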
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:49:54.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2293/configmap-test-67930ea4-9033-4472-946e-0e757a314ea7
STEP: Creating a pod to test consume configMaps
Dec 25 14:49:54.923: INFO: Waiting up to 5m0s for pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07" in namespace "configmap-2293" to be "success or failure"
Dec 25 14:49:54.971: INFO: Pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07": Phase="Pending", Reason="", readiness=false. Elapsed: 47.615399ms
Dec 25 14:49:56.994: INFO: Pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070860495s
Dec 25 14:49:59.009: INFO: Pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086229919s
Dec 25 14:50:01.024: INFO: Pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100750952s
Dec 25 14:50:03.042: INFO: Pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.118835586s
STEP: Saw pod success
Dec 25 14:50:03.042: INFO: Pod "pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07" satisfied condition "success or failure"
Dec 25 14:50:03.052: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07 container env-test: 
STEP: delete the pod
Dec 25 14:50:03.099: INFO: Waiting for pod pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07 to disappear
Dec 25 14:50:03.120: INFO: Pod pod-configmaps-d866fc57-94c1-4da4-b71d-a6c5db409e07 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:50:03.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2293" for this suite.
Dec 25 14:50:09.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:50:09.327: INFO: namespace configmap-2293 deletion completed in 6.200382654s

• [SLOW TEST:14.676 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
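The ConfigMap spec above consumes a ConfigMap through the container environment rather than a volume. A minimal sketch of that pattern, with hypothetical ConfigMap name, key, and variable name (the test generates UUID-suffixed names):

```yaml
# Hypothetical sketch of consuming a ConfigMap key as an env var.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: env-test                   # container name matches the log above
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA              # hypothetical variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example   # placeholder ConfigMap name
          key: data-1                    # hypothetical key
```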
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:50:09.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-d9880a07-9ea1-49c5-9bf0-819c387cc1cf in namespace container-probe-858
Dec 25 14:50:19.510: INFO: Started pod busybox-d9880a07-9ea1-49c5-9bf0-819c387cc1cf in namespace container-probe-858
STEP: checking the pod's current state and verifying that restartCount is present
Dec 25 14:50:19.515: INFO: Initial restart count of pod busybox-d9880a07-9ea1-49c5-9bf0-819c387cc1cf is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:54:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-858" for this suite.
Dec 25 14:54:26.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:54:26.453: INFO: namespace container-probe-858 deletion completed in 6.186855985s

• [SLOW TEST:257.126 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
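The probe spec above verifies the negative case: a pod whose exec liveness probe keeps succeeding is never restarted, so `restartCount` stays at 0 for the full observation window (note the roughly four-minute gap between creating the pod at 14:50 and tearing it down at 14:54). A minimal sketch of such a pod, assuming illustrative timings:

```yaml
# Hypothetical sketch: the probed file is created once and never removed,
# so "cat /tmp/health" keeps succeeding and the container is not restarted.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example   # placeholder
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5       # illustrative values
      periodSeconds: 5
```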
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:54:26.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 25 14:54:26.616: INFO: Waiting up to 5m0s for pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f" in namespace "emptydir-8324" to be "success or failure"
Dec 25 14:54:26.644: INFO: Pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.790458ms
Dec 25 14:54:28.657: INFO: Pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040542003s
Dec 25 14:54:30.674: INFO: Pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057569064s
Dec 25 14:54:32.680: INFO: Pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063191797s
Dec 25 14:54:34.687: INFO: Pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070474128s
STEP: Saw pod success
Dec 25 14:54:34.687: INFO: Pod "pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f" satisfied condition "success or failure"
Dec 25 14:54:34.708: INFO: Trying to get logs from node iruya-node pod pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f container test-container: 
STEP: delete the pod
Dec 25 14:54:34.791: INFO: Waiting for pod pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f to disappear
Dec 25 14:54:34.796: INFO: Pod pod-4c77b1ba-d18d-46d0-af96-3e50688fac2f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:54:34.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8324" for this suite.
Dec 25 14:54:40.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:54:41.014: INFO: namespace emptydir-8324 deletion completed in 6.212800432s

• [SLOW TEST:14.560 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
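The EmptyDir spec above exercises a tmpfs-backed emptyDir written as a non-root user; the 0666 file mode in the test name is applied by the test container when it creates a file, not by the volume definition itself. A minimal sketch, with placeholder names and an illustrative UID:

```yaml
# Hypothetical sketch: medium: Memory makes the emptyDir tmpfs-backed;
# runAsUser makes the writing container non-root.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # placeholder
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # illustrative non-root UID
  containers:
  - name: test-container       # container name matches the log above
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```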
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:54:41.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fbf6413b-2f17-487a-983d-f06e76d0fa90
STEP: Creating a pod to test consume secrets
Dec 25 14:54:41.297: INFO: Waiting up to 5m0s for pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5" in namespace "secrets-8319" to be "success or failure"
Dec 25 14:54:41.317: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.888342ms
Dec 25 14:54:43.328: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030349692s
Dec 25 14:54:45.334: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035830053s
Dec 25 14:54:47.353: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055245122s
Dec 25 14:54:49.377: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079486798s
Dec 25 14:54:51.385: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087545272s
STEP: Saw pod success
Dec 25 14:54:51.385: INFO: Pod "pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5" satisfied condition "success or failure"
Dec 25 14:54:51.389: INFO: Trying to get logs from node iruya-node pod pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5 container secret-volume-test: 
STEP: delete the pod
Dec 25 14:54:51.585: INFO: Waiting for pod pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5 to disappear
Dec 25 14:54:51.598: INFO: Pod pod-secrets-4dd178d2-19c2-4f2c-b485-5b185edb6af5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:54:51.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8319" for this suite.
Dec 25 14:54:57.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:54:57.925: INFO: namespace secrets-8319 deletion completed in 6.31743606s

• [SLOW TEST:16.910 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
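The Secrets spec above mounts a Secret as a volume for a non-root pod with both `defaultMode` (file permissions on the projected keys) and `fsGroup` (group ownership applied to the volume) set. A minimal sketch with placeholder names and illustrative IDs and mode:

```yaml
# Hypothetical sketch: defaultMode controls the mode of the projected
# secret files; fsGroup sets their group ownership.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # placeholder
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # illustrative non-root UID
    fsGroup: 1001                    # illustrative group applied to the volume
  containers:
  - name: secret-volume-test         # container name matches the log above
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # placeholder; test uses a UUID-suffixed name
      defaultMode: 0400                 # illustrative file mode
```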
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:54:57.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:54:58.086: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 25 14:55:03.094: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 25 14:55:07.112: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 25 14:55:09.123: INFO: Creating deployment "test-rollover-deployment"
Dec 25 14:55:09.160: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 25 14:55:11.198: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 25 14:55:11.223: INFO: Ensure that both replica sets have 1 created replica
Dec 25 14:55:11.229: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 25 14:55:11.247: INFO: Updating deployment test-rollover-deployment
Dec 25 14:55:11.247: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 25 14:55:13.840: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 25 14:55:13.853: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 25 14:55:13.873: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:13.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:15.897: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:15.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:17.888: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:17.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:19.932: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:19.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:21.897: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:21.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882521, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:23.904: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:23.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882521, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:25.889: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:25.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882521, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:27.896: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:27.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882521, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:29.891: INFO: all replica sets need to contain the pod-template-hash label
Dec 25 14:55:29.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882521, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712882509, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 25 14:55:31.897: INFO: 
Dec 25 14:55:31.897: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 25 14:55:31.905: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5489,SelfLink:/apis/apps/v1/namespaces/deployment-5489/deployments/test-rollover-deployment,UID:2ed54c42-16d3-4468-ad5c-3653e50ebddd,ResourceVersion:18029981,Generation:2,CreationTimestamp:2019-12-25 14:55:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-25 14:55:09 +0000 UTC 2019-12-25 14:55:09 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-25 14:55:31 +0000 UTC 2019-12-25 14:55:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 25 14:55:31.909: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5489,SelfLink:/apis/apps/v1/namespaces/deployment-5489/replicasets/test-rollover-deployment-854595fc44,UID:4041b045-1345-4bed-998d-36292605d2b6,ResourceVersion:18029971,Generation:2,CreationTimestamp:2019-12-25 14:55:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ed54c42-16d3-4468-ad5c-3653e50ebddd 0xc002af3e47 0xc002af3e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 25 14:55:31.909: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 25 14:55:31.909: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5489,SelfLink:/apis/apps/v1/namespaces/deployment-5489/replicasets/test-rollover-controller,UID:00674518-7656-4949-9af9-130db73d11fa,ResourceVersion:18029980,Generation:2,CreationTimestamp:2019-12-25 14:54:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ed54c42-16d3-4468-ad5c-3653e50ebddd 0xc002af3d77 0xc002af3d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 25 14:55:31.909: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5489,SelfLink:/apis/apps/v1/namespaces/deployment-5489/replicasets/test-rollover-deployment-9b8b997cf,UID:445d9c03-a136-431d-9238-b7f35f073aad,ResourceVersion:18029931,Generation:2,CreationTimestamp:2019-12-25 14:55:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ed54c42-16d3-4468-ad5c-3653e50ebddd 0xc002af3f20 0xc002af3f21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 25 14:55:31.915: INFO: Pod "test-rollover-deployment-854595fc44-n5rhq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-n5rhq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5489,SelfLink:/api/v1/namespaces/deployment-5489/pods/test-rollover-deployment-854595fc44-n5rhq,UID:1922964e-d28e-4e10-ac82-9f39a505eb98,ResourceVersion:18029955,Generation:0,CreationTimestamp:2019-12-25 14:55:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 4041b045-1345-4bed-998d-36292605d2b6 0xc002d5f3f7 0xc002d5f3f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c4c2r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c4c2r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-c4c2r true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d5f470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d5f490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:55:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:55:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:55:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-25 14:55:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-25 14:55:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-25 14:55:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d6f6f5b34afc8c304a0e3a25732735c7b16692c3c51c5d3316d61d64afeee9aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:55:31.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5489" for this suite.
Dec 25 14:55:38.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:55:38.105: INFO: namespace deployment-5489 deletion completed in 6.178724505s

• [SLOW TEST:40.179 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:55:38.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 25 14:55:38.212: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:55:38.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4804" for this suite.
Dec 25 14:55:44.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:55:44.472: INFO: namespace kubectl-4804 deletion completed in 6.160585498s

• [SLOW TEST:6.367 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:55:44.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2739
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 25 14:55:44.568: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 25 14:56:18.980: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-2739 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 14:56:18.981: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 14:56:19.599: INFO: Waiting for endpoints: map[]
Dec 25 14:56:19.607: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-2739 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 14:56:19.607: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 14:56:19.966: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:56:19.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2739" for this suite.
Dec 25 14:56:38.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:56:38.175: INFO: namespace pod-network-test-2739 deletion completed in 18.197386226s

• [SLOW TEST:53.702 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:56:38.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 14:56:38.223: INFO: Creating ReplicaSet my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a
Dec 25 14:56:38.248: INFO: Pod name my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a: Found 0 pods out of 1
Dec 25 14:56:43.261: INFO: Pod name my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a: Found 1 pods out of 1
Dec 25 14:56:43.261: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a" is running
Dec 25 14:56:47.279: INFO: Pod "my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a-5qrn4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:56:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:56:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:56:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 14:56:38 +0000 UTC Reason: Message:}])
Dec 25 14:56:47.279: INFO: Trying to dial the pod
Dec 25 14:56:52.314: INFO: Controller my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a: Got expected result from replica 1 [my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a-5qrn4]: "my-hostname-basic-674de24f-9d73-42ca-bef6-6373a28fdb2a-5qrn4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:56:52.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7227" for this suite.
Dec 25 14:56:58.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:56:58.524: INFO: namespace replicaset-7227 deletion completed in 6.205428822s

• [SLOW TEST:20.348 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:56:58.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 25 14:56:58.747: INFO: Waiting up to 5m0s for pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279" in namespace "downward-api-43" to be "success or failure"
Dec 25 14:56:58.763: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Pending", Reason="", readiness=false. Elapsed: 15.669346ms
Dec 25 14:57:00.776: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028629259s
Dec 25 14:57:02.795: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047154792s
Dec 25 14:57:04.805: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057177964s
Dec 25 14:57:06.817: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069419593s
Dec 25 14:57:08.830: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082791261s
Dec 25 14:57:10.851: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.103182695s
STEP: Saw pod success
Dec 25 14:57:10.851: INFO: Pod "downward-api-555a006a-ffc2-4c53-a535-daad86e31279" satisfied condition "success or failure"
Dec 25 14:57:10.868: INFO: Trying to get logs from node iruya-node pod downward-api-555a006a-ffc2-4c53-a535-daad86e31279 container dapi-container: 
STEP: delete the pod
Dec 25 14:57:10.987: INFO: Waiting for pod downward-api-555a006a-ffc2-4c53-a535-daad86e31279 to disappear
Dec 25 14:57:10.994: INFO: Pod downward-api-555a006a-ffc2-4c53-a535-daad86e31279 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:57:10.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-43" for this suite.
Dec 25 14:57:17.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:57:17.184: INFO: namespace downward-api-43 deletion completed in 6.181997132s

• [SLOW TEST:18.660 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:57:17.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 in namespace container-probe-6034
Dec 25 14:57:27.365: INFO: Started pod liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 in namespace container-probe-6034
STEP: checking the pod's current state and verifying that restartCount is present
Dec 25 14:57:27.370: INFO: Initial restart count of pod liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 is 0
Dec 25 14:57:47.474: INFO: Restart count of pod container-probe-6034/liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 is now 1 (20.104327944s elapsed)
Dec 25 14:58:09.579: INFO: Restart count of pod container-probe-6034/liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 is now 2 (42.208648904s elapsed)
Dec 25 14:58:29.664: INFO: Restart count of pod container-probe-6034/liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 is now 3 (1m2.293877946s elapsed)
Dec 25 14:58:49.785: INFO: Restart count of pod container-probe-6034/liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 is now 4 (1m22.415326387s elapsed)
Dec 25 14:59:48.129: INFO: Restart count of pod container-probe-6034/liveness-f3226c17-10e7-4cf7-9c49-467d0a94fab5 is now 5 (2m20.758591781s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 14:59:48.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6034" for this suite.
Dec 25 14:59:54.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 14:59:54.435: INFO: namespace container-probe-6034 deletion completed in 6.24225978s

• [SLOW TEST:157.251 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 14:59:54.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 25 14:59:54.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5164'
Dec 25 14:59:56.572: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 25 14:59:56.572: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 25 14:59:56.650: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2w75n]
Dec 25 14:59:56.651: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2w75n" in namespace "kubectl-5164" to be "running and ready"
Dec 25 14:59:56.674: INFO: Pod "e2e-test-nginx-rc-2w75n": Phase="Pending", Reason="", readiness=false. Elapsed: 23.253048ms
Dec 25 14:59:58.686: INFO: Pod "e2e-test-nginx-rc-2w75n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035611314s
Dec 25 15:00:00.693: INFO: Pod "e2e-test-nginx-rc-2w75n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041914214s
Dec 25 15:00:02.706: INFO: Pod "e2e-test-nginx-rc-2w75n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055083784s
Dec 25 15:00:04.711: INFO: Pod "e2e-test-nginx-rc-2w75n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059995324s
Dec 25 15:00:06.718: INFO: Pod "e2e-test-nginx-rc-2w75n": Phase="Running", Reason="", readiness=true. Elapsed: 10.067690983s
Dec 25 15:00:06.719: INFO: Pod "e2e-test-nginx-rc-2w75n" satisfied condition "running and ready"
Dec 25 15:00:06.719: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2w75n]
Dec 25 15:00:06.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5164'
Dec 25 15:00:06.914: INFO: stderr: ""
Dec 25 15:00:06.914: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 25 15:00:06.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5164'
Dec 25 15:00:07.136: INFO: stderr: ""
Dec 25 15:00:07.136: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:00:07.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5164" for this suite.
Dec 25 15:00:29.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:00:29.358: INFO: namespace kubectl-5164 deletion completed in 22.202969463s

• [SLOW TEST:34.921 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:00:29.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 25 15:00:29.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 25 15:00:29.547: INFO: stderr: ""
Dec 25 15:00:29.547: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:00:29.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9039" for this suite.
Dec 25 15:00:35.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:00:35.922: INFO: namespace kubectl-9039 deletion completed in 6.351592425s

• [SLOW TEST:6.563 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:00:35.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4f66c14b-9395-4978-837b-d2f6b893ba3c
STEP: Creating a pod to test consume secrets
Dec 25 15:00:36.351: INFO: Waiting up to 5m0s for pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4" in namespace "secrets-4690" to be "success or failure"
Dec 25 15:00:36.364: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.534079ms
Dec 25 15:00:38.373: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022181831s
Dec 25 15:00:40.381: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029684217s
Dec 25 15:00:42.389: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038239043s
Dec 25 15:00:44.401: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049853881s
Dec 25 15:00:46.410: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059270473s
STEP: Saw pod success
Dec 25 15:00:46.410: INFO: Pod "pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4" satisfied condition "success or failure"
Dec 25 15:00:46.416: INFO: Trying to get logs from node iruya-node pod pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4 container secret-volume-test: 
STEP: delete the pod
Dec 25 15:00:46.520: INFO: Waiting for pod pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4 to disappear
Dec 25 15:00:46.607: INFO: Pod pod-secrets-40f15cca-e4b9-4372-a4e5-9d00ab438fd4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:00:46.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4690" for this suite.
Dec 25 15:00:52.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:00:52.753: INFO: namespace secrets-4690 deletion completed in 6.131129057s
STEP: Destroying namespace "secret-namespace-6978" for this suite.
Dec 25 15:00:58.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:00:58.893: INFO: namespace secret-namespace-6978 deletion completed in 6.140376755s

• [SLOW TEST:22.969 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:00:58.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 15:00:59.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62" in namespace "downward-api-2163" to be "success or failure"
Dec 25 15:00:59.532: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62": Phase="Pending", Reason="", readiness=false. Elapsed: 36.599001ms
Dec 25 15:01:01.545: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049533684s
Dec 25 15:01:03.553: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057307256s
Dec 25 15:01:05.562: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065936779s
Dec 25 15:01:07.570: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073943092s
Dec 25 15:01:09.582: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08680032s
STEP: Saw pod success
Dec 25 15:01:09.583: INFO: Pod "downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62" satisfied condition "success or failure"
Dec 25 15:01:09.589: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62 container client-container: 
STEP: delete the pod
Dec 25 15:01:09.852: INFO: Waiting for pod downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62 to disappear
Dec 25 15:01:09.864: INFO: Pod downwardapi-volume-2f6fe4a4-f7cb-45f8-baf7-c0cab9228b62 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:01:09.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2163" for this suite.
Dec 25 15:01:15.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:01:16.084: INFO: namespace downward-api-2163 deletion completed in 6.205848515s

• [SLOW TEST:17.190 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:01:16.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-ecb8b86e-d3b0-438a-8b65-311e9044f7dd
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:01:16.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3109" for this suite.
Dec 25 15:01:22.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:01:22.291: INFO: namespace secrets-3109 deletion completed in 6.121024428s

• [SLOW TEST:6.205 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:01:22.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 25 15:01:22.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5936'
Dec 25 15:01:22.780: INFO: stderr: ""
Dec 25 15:01:22.781: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 25 15:01:23.795: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:23.795: INFO: Found 0 / 1
Dec 25 15:01:24.790: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:24.790: INFO: Found 0 / 1
Dec 25 15:01:25.797: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:25.797: INFO: Found 0 / 1
Dec 25 15:01:26.788: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:26.788: INFO: Found 0 / 1
Dec 25 15:01:27.802: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:27.802: INFO: Found 0 / 1
Dec 25 15:01:28.792: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:28.792: INFO: Found 0 / 1
Dec 25 15:01:29.795: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:29.795: INFO: Found 0 / 1
Dec 25 15:01:30.804: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:30.805: INFO: Found 1 / 1
Dec 25 15:01:30.805: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 25 15:01:30.815: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:30.815: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 25 15:01:30.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-c7l5l --namespace=kubectl-5936 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 25 15:01:30.927: INFO: stderr: ""
Dec 25 15:01:30.927: INFO: stdout: "pod/redis-master-c7l5l patched\n"
STEP: checking annotations
Dec 25 15:01:30.931: INFO: Selector matched 1 pods for map[app:redis]
Dec 25 15:01:30.931: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:01:30.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5936" for this suite.
Dec 25 15:01:53.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:01:53.236: INFO: namespace kubectl-5936 deletion completed in 22.24274825s

• [SLOW TEST:30.945 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:01:53.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1225 15:02:38.093832       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 25 15:02:38.093: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:02:38.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8620" for this suite.
Dec 25 15:02:48.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:02:48.277: INFO: namespace gc-8620 deletion completed in 10.178647048s

• [SLOW TEST:55.041 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:02:48.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-9445d1cb-0161-4720-8cb6-9cd34b813933
STEP: Creating a pod to test consume secrets
Dec 25 15:02:48.922: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799" in namespace "projected-7045" to be "success or failure"
Dec 25 15:02:48.953: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 30.9616ms
Dec 25 15:02:51.169: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246812681s
Dec 25 15:02:53.410: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487558524s
Dec 25 15:02:55.433: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 6.510699563s
Dec 25 15:02:57.443: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 8.520978426s
Dec 25 15:02:59.450: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 10.52792408s
Dec 25 15:03:01.458: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 12.536205513s
Dec 25 15:03:03.466: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Pending", Reason="", readiness=false. Elapsed: 14.544288079s
Dec 25 15:03:05.479: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.557371589s
STEP: Saw pod success
Dec 25 15:03:05.480: INFO: Pod "pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799" satisfied condition "success or failure"
Dec 25 15:03:05.485: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799 container projected-secret-volume-test: 
STEP: delete the pod
Dec 25 15:03:05.710: INFO: Waiting for pod pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799 to disappear
Dec 25 15:03:05.722: INFO: Pod pod-projected-secrets-3ffb402b-ed70-44b4-8167-41d5f6886799 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:03:05.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7045" for this suite.
Dec 25 15:03:11.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:03:11.961: INFO: namespace projected-7045 deletion completed in 6.224689475s

• [SLOW TEST:23.684 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:03:11.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 25 15:03:22.799: INFO: Successfully updated pod "labelsupdate8010e4bc-9aa2-4b13-bd5a-d66a7481b4c0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:03:24.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5852" for this suite.
Dec 25 15:04:04.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:04:05.062: INFO: namespace downward-api-5852 deletion completed in 40.136237491s

• [SLOW TEST:53.100 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:04:05.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 25 15:04:05.235: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7721,SelfLink:/api/v1/namespaces/watch-7721/configmaps/e2e-watch-test-label-changed,UID:0482db76-aa01-4b5c-abf3-ae0d900d97b9,ResourceVersion:18031241,Generation:0,CreationTimestamp:2019-12-25 15:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 25 15:04:05.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7721,SelfLink:/api/v1/namespaces/watch-7721/configmaps/e2e-watch-test-label-changed,UID:0482db76-aa01-4b5c-abf3-ae0d900d97b9,ResourceVersion:18031242,Generation:0,CreationTimestamp:2019-12-25 15:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 25 15:04:05.235: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7721,SelfLink:/api/v1/namespaces/watch-7721/configmaps/e2e-watch-test-label-changed,UID:0482db76-aa01-4b5c-abf3-ae0d900d97b9,ResourceVersion:18031243,Generation:0,CreationTimestamp:2019-12-25 15:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 25 15:04:15.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7721,SelfLink:/api/v1/namespaces/watch-7721/configmaps/e2e-watch-test-label-changed,UID:0482db76-aa01-4b5c-abf3-ae0d900d97b9,ResourceVersion:18031258,Generation:0,CreationTimestamp:2019-12-25 15:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 25 15:04:15.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7721,SelfLink:/api/v1/namespaces/watch-7721/configmaps/e2e-watch-test-label-changed,UID:0482db76-aa01-4b5c-abf3-ae0d900d97b9,ResourceVersion:18031259,Generation:0,CreationTimestamp:2019-12-25 15:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 25 15:04:15.342: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7721,SelfLink:/api/v1/namespaces/watch-7721/configmaps/e2e-watch-test-label-changed,UID:0482db76-aa01-4b5c-abf3-ae0d900d97b9,ResourceVersion:18031260,Generation:0,CreationTimestamp:2019-12-25 15:04:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:04:15.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7721" for this suite.
Dec 25 15:04:21.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:04:21.566: INFO: namespace watch-7721 deletion completed in 6.212701344s

• [SLOW TEST:16.503 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:04:21.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 15:04:21.748: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"aaa7708b-45f4-4dc3-abf7-0f4ea62aba80", Controller:(*bool)(0xc00236e702), BlockOwnerDeletion:(*bool)(0xc00236e703)}}
Dec 25 15:04:21.764: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6344a67f-e162-4438-b743-12485a5aafbd", Controller:(*bool)(0xc00236eaaa), BlockOwnerDeletion:(*bool)(0xc00236eaab)}}
Dec 25 15:04:21.895: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"66db6fe9-1b72-4958-b6aa-f48207eb90b2", Controller:(*bool)(0xc00274fc22), BlockOwnerDeletion:(*bool)(0xc00274fc23)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:04:26.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3472" for this suite.
Dec 25 15:04:32.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:04:33.115: INFO: namespace gc-3472 deletion completed in 6.156320244s

• [SLOW TEST:11.548 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:04:33.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 15:04:33.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8" in namespace "projected-3496" to be "success or failure"
Dec 25 15:04:33.228: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.285979ms
Dec 25 15:04:35.245: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027656014s
Dec 25 15:04:37.259: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041664244s
Dec 25 15:04:39.270: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05259281s
Dec 25 15:04:41.280: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062568609s
Dec 25 15:04:43.290: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072223702s
STEP: Saw pod success
Dec 25 15:04:43.290: INFO: Pod "downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8" satisfied condition "success or failure"
Dec 25 15:04:43.295: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8 container client-container: 
STEP: delete the pod
Dec 25 15:04:43.462: INFO: Waiting for pod downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8 to disappear
Dec 25 15:04:43.471: INFO: Pod downwardapi-volume-888a16d3-2772-43ae-a082-0d91d9369ef8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:04:43.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3496" for this suite.
Dec 25 15:04:49.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:04:49.646: INFO: namespace projected-3496 deletion completed in 6.168124342s

• [SLOW TEST:16.531 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:04:49.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 25 15:04:49.844: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5548" to be "success or failure"
Dec 25 15:04:49.867: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.864032ms
Dec 25 15:04:51.883: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03901387s
Dec 25 15:04:53.900: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055487483s
Dec 25 15:04:55.912: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067451546s
Dec 25 15:04:57.927: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082647567s
Dec 25 15:04:59.941: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097180857s
Dec 25 15:05:01.949: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.105284516s
STEP: Saw pod success
Dec 25 15:05:01.950: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 25 15:05:01.953: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 25 15:05:02.232: INFO: Waiting for pod pod-host-path-test to disappear
Dec 25 15:05:02.292: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:05:02.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5548" for this suite.
Dec 25 15:05:08.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:05:08.453: INFO: namespace hostpath-5548 deletion completed in 6.154130047s

• [SLOW TEST:18.805 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:05:08.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 25 15:05:08.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1332'
Dec 25 15:05:08.868: INFO: stderr: ""
Dec 25 15:05:08.868: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 25 15:05:08.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1332'
Dec 25 15:05:09.047: INFO: stderr: ""
Dec 25 15:05:09.047: INFO: stdout: "update-demo-nautilus-9jqwg update-demo-nautilus-jh7wz "
Dec 25 15:05:09.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jqwg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1332'
Dec 25 15:05:09.229: INFO: stderr: ""
Dec 25 15:05:09.229: INFO: stdout: ""
Dec 25 15:05:09.229: INFO: update-demo-nautilus-9jqwg is created but not running
Dec 25 15:05:14.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1332'
Dec 25 15:05:15.980: INFO: stderr: ""
Dec 25 15:05:15.980: INFO: stdout: "update-demo-nautilus-9jqwg update-demo-nautilus-jh7wz "
Dec 25 15:05:15.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jqwg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1332'
Dec 25 15:05:17.397: INFO: stderr: ""
Dec 25 15:05:17.397: INFO: stdout: ""
Dec 25 15:05:17.397: INFO: update-demo-nautilus-9jqwg is created but not running
Dec 25 15:05:22.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1332'
Dec 25 15:05:22.569: INFO: stderr: ""
Dec 25 15:05:22.569: INFO: stdout: "update-demo-nautilus-9jqwg update-demo-nautilus-jh7wz "
Dec 25 15:05:22.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jqwg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1332'
Dec 25 15:05:22.672: INFO: stderr: ""
Dec 25 15:05:22.672: INFO: stdout: "true"
Dec 25 15:05:22.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jqwg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1332'
Dec 25 15:05:22.808: INFO: stderr: ""
Dec 25 15:05:22.809: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:05:22.809: INFO: validating pod update-demo-nautilus-9jqwg
Dec 25 15:05:22.845: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:05:22.845: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:05:22.845: INFO: update-demo-nautilus-9jqwg is verified up and running
Dec 25 15:05:22.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jh7wz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1332'
Dec 25 15:05:22.941: INFO: stderr: ""
Dec 25 15:05:22.942: INFO: stdout: "true"
Dec 25 15:05:22.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jh7wz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1332'
Dec 25 15:05:23.077: INFO: stderr: ""
Dec 25 15:05:23.078: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:05:23.078: INFO: validating pod update-demo-nautilus-jh7wz
Dec 25 15:05:23.092: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:05:23.092: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:05:23.092: INFO: update-demo-nautilus-jh7wz is verified up and running
STEP: using delete to clean up resources
Dec 25 15:05:23.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1332'
Dec 25 15:05:23.224: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 25 15:05:23.225: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 25 15:05:23.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1332'
Dec 25 15:05:23.354: INFO: stderr: "No resources found.\n"
Dec 25 15:05:23.354: INFO: stdout: ""
Dec 25 15:05:23.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1332 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 25 15:05:23.553: INFO: stderr: ""
Dec 25 15:05:23.553: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:05:23.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1332" for this suite.
Dec 25 15:05:45.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:05:45.902: INFO: namespace kubectl-1332 deletion completed in 22.276001811s

• [SLOW TEST:37.449 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:05:45.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-688af1f7-9c26-4857-92cc-a7f0fcb28826
STEP: Creating secret with name secret-projected-all-test-volume-2e4d999d-dcef-4263-b023-5aa6a547e908
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 25 15:05:46.093: INFO: Waiting up to 5m0s for pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110" in namespace "projected-4083" to be "success or failure"
Dec 25 15:05:46.158: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110": Phase="Pending", Reason="", readiness=false. Elapsed: 65.306479ms
Dec 25 15:05:48.169: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075929732s
Dec 25 15:05:50.175: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082008214s
Dec 25 15:05:52.182: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089191434s
Dec 25 15:05:54.189: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095638091s
Dec 25 15:05:56.195: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101989597s
STEP: Saw pod success
Dec 25 15:05:56.195: INFO: Pod "projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110" satisfied condition "success or failure"
Dec 25 15:05:56.198: INFO: Trying to get logs from node iruya-node pod projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110 container projected-all-volume-test: 
STEP: delete the pod
Dec 25 15:05:56.256: INFO: Waiting for pod projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110 to disappear
Dec 25 15:05:56.272: INFO: Pod projected-volume-a228f445-29c9-4e83-bfd5-9c64e621a110 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:05:56.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4083" for this suite.
Dec 25 15:06:02.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:06:02.461: INFO: namespace projected-4083 deletion completed in 6.135169258s

• [SLOW TEST:16.559 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:06:02.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 25 15:06:11.185: INFO: Successfully updated pod "annotationupdatedf89bd41-c516-41cf-9cfa-4313c062c0c8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:06:13.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8664" for this suite.
Dec 25 15:06:35.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:06:35.442: INFO: namespace projected-8664 deletion completed in 22.164047431s

• [SLOW TEST:32.980 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:06:35.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 25 15:06:35.567: INFO: Waiting up to 5m0s for pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930" in namespace "emptydir-4556" to be "success or failure"
Dec 25 15:06:35.571: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422265ms
Dec 25 15:06:37.581: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013298473s
Dec 25 15:06:39.595: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027627864s
Dec 25 15:06:41.604: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036412551s
Dec 25 15:06:43.619: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051667464s
Dec 25 15:06:45.629: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061576569s
STEP: Saw pod success
Dec 25 15:06:45.629: INFO: Pod "pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930" satisfied condition "success or failure"
Dec 25 15:06:45.634: INFO: Trying to get logs from node iruya-node pod pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930 container test-container: 
STEP: delete the pod
Dec 25 15:06:45.887: INFO: Waiting for pod pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930 to disappear
Dec 25 15:06:45.946: INFO: Pod pod-dc35f22e-7cfd-4d01-ab27-68f2ffa55930 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:06:45.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4556" for this suite.
Dec 25 15:06:52.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:06:52.216: INFO: namespace emptydir-4556 deletion completed in 6.262723379s

• [SLOW TEST:16.773 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:06:52.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3673
I1225 15:06:52.334821       9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3673, replica count: 1
I1225 15:06:53.386022       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:06:54.386535       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:06:55.386963       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:06:56.387518       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:06:57.388050       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:06:58.388974       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:06:59.389834       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:07:00.390346       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:07:01.390801       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 25 15:07:01.639: INFO: Created: latency-svc-sz47q
Dec 25 15:07:01.664: INFO: Got endpoints: latency-svc-sz47q [172.86117ms]
Dec 25 15:07:01.739: INFO: Created: latency-svc-hbr9v
Dec 25 15:07:01.888: INFO: Got endpoints: latency-svc-hbr9v [223.422476ms]
Dec 25 15:07:01.899: INFO: Created: latency-svc-zpb4c
Dec 25 15:07:01.924: INFO: Got endpoints: latency-svc-zpb4c [257.628762ms]
Dec 25 15:07:01.978: INFO: Created: latency-svc-8v4j9
Dec 25 15:07:02.063: INFO: Got endpoints: latency-svc-8v4j9 [396.579026ms]
Dec 25 15:07:02.078: INFO: Created: latency-svc-8hx5p
Dec 25 15:07:02.093: INFO: Got endpoints: latency-svc-8hx5p [428.100769ms]
Dec 25 15:07:02.149: INFO: Created: latency-svc-hbqjp
Dec 25 15:07:02.158: INFO: Got endpoints: latency-svc-hbqjp [491.534522ms]
Dec 25 15:07:02.253: INFO: Created: latency-svc-rh55l
Dec 25 15:07:02.259: INFO: Got endpoints: latency-svc-rh55l [592.39677ms]
Dec 25 15:07:02.344: INFO: Created: latency-svc-vt6gp
Dec 25 15:07:02.391: INFO: Got endpoints: latency-svc-vt6gp [726.431812ms]
Dec 25 15:07:02.408: INFO: Created: latency-svc-67sfv
Dec 25 15:07:02.419: INFO: Got endpoints: latency-svc-67sfv [753.072607ms]
Dec 25 15:07:02.465: INFO: Created: latency-svc-tr6vp
Dec 25 15:07:02.468: INFO: Got endpoints: latency-svc-tr6vp [801.463329ms]
Dec 25 15:07:02.579: INFO: Created: latency-svc-c8x5p
Dec 25 15:07:02.625: INFO: Created: latency-svc-hx9kf
Dec 25 15:07:02.625: INFO: Got endpoints: latency-svc-c8x5p [959.703715ms]
Dec 25 15:07:02.640: INFO: Got endpoints: latency-svc-hx9kf [973.868212ms]
Dec 25 15:07:02.764: INFO: Created: latency-svc-4b2g9
Dec 25 15:07:02.826: INFO: Got endpoints: latency-svc-4b2g9 [1.159448978s]
Dec 25 15:07:02.837: INFO: Created: latency-svc-95dlx
Dec 25 15:07:02.931: INFO: Got endpoints: latency-svc-95dlx [1.266433466s]
Dec 25 15:07:02.951: INFO: Created: latency-svc-854lf
Dec 25 15:07:02.967: INFO: Got endpoints: latency-svc-854lf [1.300723471s]
Dec 25 15:07:03.030: INFO: Created: latency-svc-dn5lj
Dec 25 15:07:03.030: INFO: Got endpoints: latency-svc-dn5lj [1.363578441s]
Dec 25 15:07:03.118: INFO: Created: latency-svc-qmxp7
Dec 25 15:07:03.127: INFO: Got endpoints: latency-svc-qmxp7 [1.238046202s]
Dec 25 15:07:03.193: INFO: Created: latency-svc-qzx4h
Dec 25 15:07:03.211: INFO: Got endpoints: latency-svc-qzx4h [1.286448458s]
Dec 25 15:07:03.371: INFO: Created: latency-svc-w6b7z
Dec 25 15:07:03.379: INFO: Got endpoints: latency-svc-w6b7z [1.31586181s]
Dec 25 15:07:03.424: INFO: Created: latency-svc-t447q
Dec 25 15:07:03.431: INFO: Got endpoints: latency-svc-t447q [1.337288126s]
Dec 25 15:07:03.544: INFO: Created: latency-svc-94r9v
Dec 25 15:07:03.550: INFO: Got endpoints: latency-svc-94r9v [1.392717782s]
Dec 25 15:07:03.559: INFO: Created: latency-svc-p6cz8
Dec 25 15:07:03.564: INFO: Got endpoints: latency-svc-p6cz8 [1.305613442s]
Dec 25 15:07:03.631: INFO: Created: latency-svc-htk8g
Dec 25 15:07:03.642: INFO: Got endpoints: latency-svc-htk8g [1.249632767s]
Dec 25 15:07:03.726: INFO: Created: latency-svc-5j9qg
Dec 25 15:07:03.737: INFO: Got endpoints: latency-svc-5j9qg [1.317525501s]
Dec 25 15:07:03.784: INFO: Created: latency-svc-h9zzq
Dec 25 15:07:03.944: INFO: Got endpoints: latency-svc-h9zzq [1.476180036s]
Dec 25 15:07:03.955: INFO: Created: latency-svc-vdgrh
Dec 25 15:07:03.968: INFO: Got endpoints: latency-svc-vdgrh [1.342449266s]
Dec 25 15:07:04.033: INFO: Created: latency-svc-c9vnv
Dec 25 15:07:04.097: INFO: Got endpoints: latency-svc-c9vnv [1.455944794s]
Dec 25 15:07:04.135: INFO: Created: latency-svc-crdrq
Dec 25 15:07:04.176: INFO: Got endpoints: latency-svc-crdrq [1.34958741s]
Dec 25 15:07:04.343: INFO: Created: latency-svc-wrs92
Dec 25 15:07:04.348: INFO: Got endpoints: latency-svc-wrs92 [1.415733913s]
Dec 25 15:07:04.412: INFO: Created: latency-svc-7mxk6
Dec 25 15:07:04.414: INFO: Got endpoints: latency-svc-7mxk6 [1.44598415s]
Dec 25 15:07:04.513: INFO: Created: latency-svc-m9jmb
Dec 25 15:07:04.526: INFO: Got endpoints: latency-svc-m9jmb [1.496066825s]
Dec 25 15:07:04.575: INFO: Created: latency-svc-tfp9g
Dec 25 15:07:04.596: INFO: Got endpoints: latency-svc-tfp9g [1.468758463s]
Dec 25 15:07:04.640: INFO: Created: latency-svc-6b57d
Dec 25 15:07:04.650: INFO: Got endpoints: latency-svc-6b57d [1.438631888s]
Dec 25 15:07:04.688: INFO: Created: latency-svc-ztdwc
Dec 25 15:07:04.699: INFO: Got endpoints: latency-svc-ztdwc [1.319413353s]
Dec 25 15:07:04.732: INFO: Created: latency-svc-8c8m2
Dec 25 15:07:04.804: INFO: Got endpoints: latency-svc-8c8m2 [1.372601688s]
Dec 25 15:07:04.841: INFO: Created: latency-svc-4nj9m
Dec 25 15:07:04.850: INFO: Got endpoints: latency-svc-4nj9m [1.299118882s]
Dec 25 15:07:04.902: INFO: Created: latency-svc-nn4rw
Dec 25 15:07:04.954: INFO: Got endpoints: latency-svc-nn4rw [1.389572078s]
Dec 25 15:07:05.024: INFO: Created: latency-svc-w8d8p
Dec 25 15:07:05.052: INFO: Got endpoints: latency-svc-w8d8p [1.409492025s]
Dec 25 15:07:05.159: INFO: Created: latency-svc-n6b9c
Dec 25 15:07:05.225: INFO: Got endpoints: latency-svc-n6b9c [1.487663989s]
Dec 25 15:07:05.252: INFO: Created: latency-svc-s2ctb
Dec 25 15:07:05.368: INFO: Got endpoints: latency-svc-s2ctb [1.422664029s]
Dec 25 15:07:05.419: INFO: Created: latency-svc-zwd64
Dec 25 15:07:05.436: INFO: Got endpoints: latency-svc-zwd64 [1.46810297s]
Dec 25 15:07:05.697: INFO: Created: latency-svc-mb5l9
Dec 25 15:07:05.701: INFO: Got endpoints: latency-svc-mb5l9 [1.60391744s]
Dec 25 15:07:05.738: INFO: Created: latency-svc-vd7pg
Dec 25 15:07:05.740: INFO: Got endpoints: latency-svc-vd7pg [1.563781393s]
Dec 25 15:07:05.874: INFO: Created: latency-svc-s6gzq
Dec 25 15:07:05.880: INFO: Got endpoints: latency-svc-s6gzq [1.532354103s]
Dec 25 15:07:05.937: INFO: Created: latency-svc-ddcvm
Dec 25 15:07:05.953: INFO: Got endpoints: latency-svc-ddcvm [1.539236445s]
Dec 25 15:07:06.039: INFO: Created: latency-svc-r58nj
Dec 25 15:07:06.076: INFO: Got endpoints: latency-svc-r58nj [1.548505061s]
Dec 25 15:07:06.080: INFO: Created: latency-svc-b4k5z
Dec 25 15:07:06.085: INFO: Got endpoints: latency-svc-b4k5z [1.488472161s]
Dec 25 15:07:06.303: INFO: Created: latency-svc-kbjgz
Dec 25 15:07:06.320: INFO: Got endpoints: latency-svc-kbjgz [1.669149147s]
Dec 25 15:07:06.375: INFO: Created: latency-svc-p9zdv
Dec 25 15:07:06.384: INFO: Got endpoints: latency-svc-p9zdv [1.685013216s]
Dec 25 15:07:06.489: INFO: Created: latency-svc-v2ltv
Dec 25 15:07:06.526: INFO: Got endpoints: latency-svc-v2ltv [1.721910635s]
Dec 25 15:07:06.536: INFO: Created: latency-svc-6xzgj
Dec 25 15:07:06.544: INFO: Got endpoints: latency-svc-6xzgj [1.693457146s]
Dec 25 15:07:06.633: INFO: Created: latency-svc-nnhb6
Dec 25 15:07:06.649: INFO: Got endpoints: latency-svc-nnhb6 [1.694070864s]
Dec 25 15:07:06.656: INFO: Created: latency-svc-7h2l6
Dec 25 15:07:06.666: INFO: Got endpoints: latency-svc-7h2l6 [1.613774516s]
Dec 25 15:07:06.695: INFO: Created: latency-svc-bq9kd
Dec 25 15:07:06.699: INFO: Got endpoints: latency-svc-bq9kd [1.473508409s]
Dec 25 15:07:06.765: INFO: Created: latency-svc-s8lnr
Dec 25 15:07:06.784: INFO: Got endpoints: latency-svc-s8lnr [1.415856492s]
Dec 25 15:07:06.834: INFO: Created: latency-svc-xxnvg
Dec 25 15:07:06.859: INFO: Got endpoints: latency-svc-xxnvg [1.42312943s]
Dec 25 15:07:06.942: INFO: Created: latency-svc-dlv6d
Dec 25 15:07:06.949: INFO: Got endpoints: latency-svc-dlv6d [1.247201216s]
Dec 25 15:07:06.993: INFO: Created: latency-svc-txstr
Dec 25 15:07:07.020: INFO: Got endpoints: latency-svc-txstr [1.279969404s]
Dec 25 15:07:07.087: INFO: Created: latency-svc-rc2kr
Dec 25 15:07:07.147: INFO: Got endpoints: latency-svc-rc2kr [1.266444781s]
Dec 25 15:07:07.163: INFO: Created: latency-svc-bsglh
Dec 25 15:07:07.328: INFO: Got endpoints: latency-svc-bsglh [1.374811878s]
Dec 25 15:07:07.331: INFO: Created: latency-svc-xw8nc
Dec 25 15:07:07.413: INFO: Got endpoints: latency-svc-xw8nc [1.336845587s]
Dec 25 15:07:07.428: INFO: Created: latency-svc-mmwp9
Dec 25 15:07:07.525: INFO: Got endpoints: latency-svc-mmwp9 [1.440132945s]
Dec 25 15:07:07.570: INFO: Created: latency-svc-wc75g
Dec 25 15:07:07.571: INFO: Got endpoints: latency-svc-wc75g [1.250641166s]
Dec 25 15:07:07.632: INFO: Created: latency-svc-tp4jz
Dec 25 15:07:07.705: INFO: Got endpoints: latency-svc-tp4jz [1.319895024s]
Dec 25 15:07:07.710: INFO: Created: latency-svc-dx6lz
Dec 25 15:07:07.716: INFO: Got endpoints: latency-svc-dx6lz [1.189270577s]
Dec 25 15:07:07.766: INFO: Created: latency-svc-zllmm
Dec 25 15:07:07.836: INFO: Got endpoints: latency-svc-zllmm [1.292204043s]
Dec 25 15:07:07.878: INFO: Created: latency-svc-9ck6j
Dec 25 15:07:07.880: INFO: Got endpoints: latency-svc-9ck6j [1.230281413s]
Dec 25 15:07:07.938: INFO: Created: latency-svc-w72lh
Dec 25 15:07:08.013: INFO: Got endpoints: latency-svc-w72lh [1.346333741s]
Dec 25 15:07:08.032: INFO: Created: latency-svc-k2kpt
Dec 25 15:07:08.039: INFO: Got endpoints: latency-svc-k2kpt [1.33903672s]
Dec 25 15:07:08.089: INFO: Created: latency-svc-44t6w
Dec 25 15:07:08.090: INFO: Got endpoints: latency-svc-44t6w [1.305587834s]
Dec 25 15:07:08.181: INFO: Created: latency-svc-5kjmt
Dec 25 15:07:08.188: INFO: Got endpoints: latency-svc-5kjmt [1.327600089s]
Dec 25 15:07:08.229: INFO: Created: latency-svc-k4xvx
Dec 25 15:07:08.244: INFO: Got endpoints: latency-svc-k4xvx [1.294399269s]
Dec 25 15:07:08.376: INFO: Created: latency-svc-5cffv
Dec 25 15:07:08.377: INFO: Got endpoints: latency-svc-5cffv [1.356197024s]
Dec 25 15:07:08.436: INFO: Created: latency-svc-8sqmf
Dec 25 15:07:08.449: INFO: Got endpoints: latency-svc-8sqmf [1.301682492s]
Dec 25 15:07:08.569: INFO: Created: latency-svc-h6788
Dec 25 15:07:08.575: INFO: Got endpoints: latency-svc-h6788 [1.24710948s]
Dec 25 15:07:08.627: INFO: Created: latency-svc-2w6tw
Dec 25 15:07:08.692: INFO: Got endpoints: latency-svc-2w6tw [1.278340033s]
Dec 25 15:07:08.728: INFO: Created: latency-svc-q6vdz
Dec 25 15:07:08.748: INFO: Got endpoints: latency-svc-q6vdz [1.222280676s]
Dec 25 15:07:08.856: INFO: Created: latency-svc-67chc
Dec 25 15:07:08.858: INFO: Got endpoints: latency-svc-67chc [1.286710473s]
Dec 25 15:07:08.931: INFO: Created: latency-svc-zkm7m
Dec 25 15:07:08.977: INFO: Got endpoints: latency-svc-zkm7m [1.271763827s]
Dec 25 15:07:09.007: INFO: Created: latency-svc-4brq5
Dec 25 15:07:09.048: INFO: Got endpoints: latency-svc-4brq5 [1.331607022s]
Dec 25 15:07:09.057: INFO: Created: latency-svc-b7scr
Dec 25 15:07:09.125: INFO: Got endpoints: latency-svc-b7scr [1.288042768s]
Dec 25 15:07:09.135: INFO: Created: latency-svc-rhlql
Dec 25 15:07:09.157: INFO: Got endpoints: latency-svc-rhlql [1.27688583s]
Dec 25 15:07:09.200: INFO: Created: latency-svc-9zvpb
Dec 25 15:07:09.203: INFO: Got endpoints: latency-svc-9zvpb [1.189606612s]
Dec 25 15:07:09.384: INFO: Created: latency-svc-26j4c
Dec 25 15:07:09.394: INFO: Got endpoints: latency-svc-26j4c [1.355046391s]
Dec 25 15:07:09.525: INFO: Created: latency-svc-s8mbb
Dec 25 15:07:09.530: INFO: Got endpoints: latency-svc-s8mbb [1.439387564s]
Dec 25 15:07:09.593: INFO: Created: latency-svc-2tsnr
Dec 25 15:07:09.601: INFO: Got endpoints: latency-svc-2tsnr [1.413121744s]
Dec 25 15:07:09.709: INFO: Created: latency-svc-pxl4n
Dec 25 15:07:09.768: INFO: Got endpoints: latency-svc-pxl4n [1.523911291s]
Dec 25 15:07:09.770: INFO: Created: latency-svc-rbcbm
Dec 25 15:07:09.833: INFO: Got endpoints: latency-svc-rbcbm [1.455719864s]
Dec 25 15:07:09.837: INFO: Created: latency-svc-d9khh
Dec 25 15:07:09.838: INFO: Got endpoints: latency-svc-d9khh [1.388057772s]
Dec 25 15:07:09.894: INFO: Created: latency-svc-bjk6v
Dec 25 15:07:09.902: INFO: Got endpoints: latency-svc-bjk6v [1.326050482s]
Dec 25 15:07:10.008: INFO: Created: latency-svc-z6q6r
Dec 25 15:07:10.015: INFO: Got endpoints: latency-svc-z6q6r [1.322336882s]
Dec 25 15:07:10.173: INFO: Created: latency-svc-rrtpz
Dec 25 15:07:10.186: INFO: Created: latency-svc-pjk7m
Dec 25 15:07:10.187: INFO: Got endpoints: latency-svc-rrtpz [1.437875428s]
Dec 25 15:07:10.209: INFO: Got endpoints: latency-svc-pjk7m [1.350598243s]
Dec 25 15:07:10.274: INFO: Created: latency-svc-l2g5x
Dec 25 15:07:10.377: INFO: Got endpoints: latency-svc-l2g5x [1.400221481s]
Dec 25 15:07:10.386: INFO: Created: latency-svc-x87k6
Dec 25 15:07:10.386: INFO: Got endpoints: latency-svc-x87k6 [1.338002481s]
Dec 25 15:07:10.421: INFO: Created: latency-svc-hq66x
Dec 25 15:07:10.441: INFO: Got endpoints: latency-svc-hq66x [1.31619087s]
Dec 25 15:07:10.545: INFO: Created: latency-svc-6ml8b
Dec 25 15:07:10.560: INFO: Got endpoints: latency-svc-6ml8b [1.403231848s]
Dec 25 15:07:10.613: INFO: Created: latency-svc-kk2v9
Dec 25 15:07:10.701: INFO: Created: latency-svc-hhg6k
Dec 25 15:07:10.707: INFO: Got endpoints: latency-svc-kk2v9 [1.504028723s]
Dec 25 15:07:10.769: INFO: Got endpoints: latency-svc-hhg6k [1.374965935s]
Dec 25 15:07:10.781: INFO: Created: latency-svc-vj2bm
Dec 25 15:07:10.865: INFO: Got endpoints: latency-svc-vj2bm [1.334580048s]
Dec 25 15:07:10.899: INFO: Created: latency-svc-7g4qk
Dec 25 15:07:10.908: INFO: Got endpoints: latency-svc-7g4qk [1.30701232s]
Dec 25 15:07:10.969: INFO: Created: latency-svc-bw7ds
Dec 25 15:07:11.028: INFO: Got endpoints: latency-svc-bw7ds [1.259315945s]
Dec 25 15:07:11.056: INFO: Created: latency-svc-bmzrr
Dec 25 15:07:11.068: INFO: Got endpoints: latency-svc-bmzrr [1.234330001s]
Dec 25 15:07:11.236: INFO: Created: latency-svc-nrs98
Dec 25 15:07:11.277: INFO: Got endpoints: latency-svc-nrs98 [1.439144457s]
Dec 25 15:07:11.286: INFO: Created: latency-svc-kv4nc
Dec 25 15:07:11.301: INFO: Got endpoints: latency-svc-kv4nc [1.398943168s]
Dec 25 15:07:11.493: INFO: Created: latency-svc-tcp85
Dec 25 15:07:11.524: INFO: Got endpoints: latency-svc-tcp85 [1.508288335s]
Dec 25 15:07:11.751: INFO: Created: latency-svc-cbrdx
Dec 25 15:07:11.805: INFO: Got endpoints: latency-svc-cbrdx [1.618092943s]
Dec 25 15:07:11.806: INFO: Created: latency-svc-vkz9k
Dec 25 15:07:11.962: INFO: Got endpoints: latency-svc-vkz9k [1.752844218s]
Dec 25 15:07:11.970: INFO: Created: latency-svc-c5jfs
Dec 25 15:07:11.995: INFO: Got endpoints: latency-svc-c5jfs [1.617048471s]
Dec 25 15:07:12.035: INFO: Created: latency-svc-qrptl
Dec 25 15:07:12.042: INFO: Got endpoints: latency-svc-qrptl [1.655289176s]
Dec 25 15:07:12.208: INFO: Created: latency-svc-44lxc
Dec 25 15:07:12.263: INFO: Got endpoints: latency-svc-44lxc [1.821440446s]
Dec 25 15:07:12.295: INFO: Created: latency-svc-wmr2g
Dec 25 15:07:12.377: INFO: Got endpoints: latency-svc-wmr2g [1.815845649s]
Dec 25 15:07:12.444: INFO: Created: latency-svc-9jlcx
Dec 25 15:07:12.463: INFO: Got endpoints: latency-svc-9jlcx [1.755477954s]
Dec 25 15:07:12.590: INFO: Created: latency-svc-2nl7t
Dec 25 15:07:12.600: INFO: Got endpoints: latency-svc-2nl7t [1.829800099s]
Dec 25 15:07:12.638: INFO: Created: latency-svc-ll2rk
Dec 25 15:07:12.644: INFO: Got endpoints: latency-svc-ll2rk [1.779070193s]
Dec 25 15:07:12.737: INFO: Created: latency-svc-8fb6v
Dec 25 15:07:12.779: INFO: Got endpoints: latency-svc-8fb6v [1.87004513s]
Dec 25 15:07:12.783: INFO: Created: latency-svc-9sb5m
Dec 25 15:07:12.791: INFO: Got endpoints: latency-svc-9sb5m [1.762427621s]
Dec 25 15:07:12.830: INFO: Created: latency-svc-lzf5v
Dec 25 15:07:12.974: INFO: Got endpoints: latency-svc-lzf5v [1.906029942s]
Dec 25 15:07:12.990: INFO: Created: latency-svc-r9jmt
Dec 25 15:07:13.001: INFO: Got endpoints: latency-svc-r9jmt [1.723424512s]
Dec 25 15:07:13.034: INFO: Created: latency-svc-bpgcn
Dec 25 15:07:13.044: INFO: Got endpoints: latency-svc-bpgcn [1.742388618s]
Dec 25 15:07:13.139: INFO: Created: latency-svc-ngw7m
Dec 25 15:07:13.223: INFO: Got endpoints: latency-svc-ngw7m [1.698627741s]
Dec 25 15:07:13.226: INFO: Created: latency-svc-gx77x
Dec 25 15:07:13.233: INFO: Got endpoints: latency-svc-gx77x [1.427951649s]
Dec 25 15:07:13.378: INFO: Created: latency-svc-qjr6d
Dec 25 15:07:13.394: INFO: Got endpoints: latency-svc-qjr6d [1.431486483s]
Dec 25 15:07:13.530: INFO: Created: latency-svc-fth7h
Dec 25 15:07:13.536: INFO: Got endpoints: latency-svc-fth7h [1.540611206s]
Dec 25 15:07:13.593: INFO: Created: latency-svc-xx5t4
Dec 25 15:07:13.594: INFO: Got endpoints: latency-svc-xx5t4 [1.551515117s]
Dec 25 15:07:13.707: INFO: Created: latency-svc-nb9zn
Dec 25 15:07:13.719: INFO: Got endpoints: latency-svc-nb9zn [1.454759574s]
Dec 25 15:07:13.771: INFO: Created: latency-svc-hmdfn
Dec 25 15:07:13.915: INFO: Created: latency-svc-jb785
Dec 25 15:07:13.921: INFO: Got endpoints: latency-svc-hmdfn [1.543830999s]
Dec 25 15:07:13.936: INFO: Got endpoints: latency-svc-jb785 [1.472856162s]
Dec 25 15:07:14.001: INFO: Created: latency-svc-r969k
Dec 25 15:07:14.175: INFO: Got endpoints: latency-svc-r969k [1.574429594s]
Dec 25 15:07:14.261: INFO: Created: latency-svc-4mmbn
Dec 25 15:07:14.261: INFO: Got endpoints: latency-svc-4mmbn [1.616758149s]
Dec 25 15:07:14.434: INFO: Created: latency-svc-mbwww
Dec 25 15:07:14.444: INFO: Got endpoints: latency-svc-mbwww [1.664380429s]
Dec 25 15:07:14.548: INFO: Created: latency-svc-t4dnb
Dec 25 15:07:14.548: INFO: Got endpoints: latency-svc-t4dnb [1.757048293s]
Dec 25 15:07:14.655: INFO: Created: latency-svc-r6qm9
Dec 25 15:07:14.662: INFO: Got endpoints: latency-svc-r6qm9 [1.687979695s]
Dec 25 15:07:14.720: INFO: Created: latency-svc-jmkvx
Dec 25 15:07:14.788: INFO: Got endpoints: latency-svc-jmkvx [1.787051121s]
Dec 25 15:07:14.805: INFO: Created: latency-svc-2m6fc
Dec 25 15:07:14.806: INFO: Got endpoints: latency-svc-2m6fc [1.761467038s]
Dec 25 15:07:14.840: INFO: Created: latency-svc-xg4ps
Dec 25 15:07:14.860: INFO: Got endpoints: latency-svc-xg4ps [1.636210685s]
Dec 25 15:07:14.956: INFO: Created: latency-svc-np9qp
Dec 25 15:07:14.998: INFO: Created: latency-svc-5tnxl
Dec 25 15:07:14.998: INFO: Got endpoints: latency-svc-np9qp [1.764130208s]
Dec 25 15:07:15.028: INFO: Got endpoints: latency-svc-5tnxl [1.633831915s]
Dec 25 15:07:15.136: INFO: Created: latency-svc-h6sfb
Dec 25 15:07:15.182: INFO: Got endpoints: latency-svc-h6sfb [1.646046277s]
Dec 25 15:07:15.209: INFO: Created: latency-svc-t57rs
Dec 25 15:07:15.209: INFO: Got endpoints: latency-svc-t57rs [1.615123376s]
Dec 25 15:07:15.361: INFO: Created: latency-svc-jt252
Dec 25 15:07:15.367: INFO: Got endpoints: latency-svc-jt252 [1.648203322s]
Dec 25 15:07:15.406: INFO: Created: latency-svc-t85hk
Dec 25 15:07:15.450: INFO: Got endpoints: latency-svc-t85hk [1.52784021s]
Dec 25 15:07:15.604: INFO: Created: latency-svc-ch986
Dec 25 15:07:15.646: INFO: Got endpoints: latency-svc-ch986 [1.709510942s]
Dec 25 15:07:15.649: INFO: Created: latency-svc-6q78d
Dec 25 15:07:15.685: INFO: Got endpoints: latency-svc-6q78d [1.509628391s]
Dec 25 15:07:15.807: INFO: Created: latency-svc-hx4f2
Dec 25 15:07:15.820: INFO: Got endpoints: latency-svc-hx4f2 [1.559023145s]
Dec 25 15:07:15.867: INFO: Created: latency-svc-kvscr
Dec 25 15:07:15.880: INFO: Got endpoints: latency-svc-kvscr [1.435930424s]
Dec 25 15:07:15.969: INFO: Created: latency-svc-2qfj5
Dec 25 15:07:15.972: INFO: Got endpoints: latency-svc-2qfj5 [1.423508939s]
Dec 25 15:07:16.013: INFO: Created: latency-svc-qd6n9
Dec 25 15:07:16.016: INFO: Got endpoints: latency-svc-qd6n9 [1.353964512s]
Dec 25 15:07:16.055: INFO: Created: latency-svc-2shfp
Dec 25 15:07:16.154: INFO: Got endpoints: latency-svc-2shfp [1.365299348s]
Dec 25 15:07:16.219: INFO: Created: latency-svc-75b2z
Dec 25 15:07:16.220: INFO: Got endpoints: latency-svc-75b2z [1.413814581s]
Dec 25 15:07:16.388: INFO: Created: latency-svc-2lxbh
Dec 25 15:07:16.390: INFO: Got endpoints: latency-svc-2lxbh [1.529867962s]
Dec 25 15:07:16.479: INFO: Created: latency-svc-v6rht
Dec 25 15:07:16.545: INFO: Got endpoints: latency-svc-v6rht [1.546756409s]
Dec 25 15:07:16.594: INFO: Created: latency-svc-68ll7
Dec 25 15:07:16.595: INFO: Got endpoints: latency-svc-68ll7 [1.566413289s]
Dec 25 15:07:16.668: INFO: Created: latency-svc-k6mbk
Dec 25 15:07:16.696: INFO: Got endpoints: latency-svc-k6mbk [1.51290142s]
Dec 25 15:07:16.777: INFO: Created: latency-svc-cd6m4
Dec 25 15:07:16.782: INFO: Got endpoints: latency-svc-cd6m4 [1.572558112s]
Dec 25 15:07:16.879: INFO: Created: latency-svc-z29mh
Dec 25 15:07:16.894: INFO: Got endpoints: latency-svc-z29mh [1.52641354s]
Dec 25 15:07:16.945: INFO: Created: latency-svc-8s4kv
Dec 25 15:07:17.040: INFO: Got endpoints: latency-svc-8s4kv [1.589805401s]
Dec 25 15:07:17.052: INFO: Created: latency-svc-p5x22
Dec 25 15:07:17.053: INFO: Got endpoints: latency-svc-p5x22 [1.406531297s]
Dec 25 15:07:17.094: INFO: Created: latency-svc-8m2zm
Dec 25 15:07:17.116: INFO: Got endpoints: latency-svc-8m2zm [1.43117606s]
Dec 25 15:07:17.215: INFO: Created: latency-svc-ghhkw
Dec 25 15:07:17.216: INFO: Got endpoints: latency-svc-ghhkw [1.395877354s]
Dec 25 15:07:17.382: INFO: Created: latency-svc-tc7hg
Dec 25 15:07:17.396: INFO: Got endpoints: latency-svc-tc7hg [1.515460374s]
Dec 25 15:07:17.456: INFO: Created: latency-svc-2bjnp
Dec 25 15:07:17.468: INFO: Got endpoints: latency-svc-2bjnp [1.495855453s]
Dec 25 15:07:17.646: INFO: Created: latency-svc-67ggk
Dec 25 15:07:17.670: INFO: Got endpoints: latency-svc-67ggk [1.653789724s]
Dec 25 15:07:17.775: INFO: Created: latency-svc-jqm99
Dec 25 15:07:17.818: INFO: Created: latency-svc-9qddj
Dec 25 15:07:17.819: INFO: Got endpoints: latency-svc-jqm99 [1.663962856s]
Dec 25 15:07:17.832: INFO: Got endpoints: latency-svc-9qddj [1.612322143s]
Dec 25 15:07:17.958: INFO: Created: latency-svc-mmk9h
Dec 25 15:07:18.005: INFO: Got endpoints: latency-svc-mmk9h [1.614822088s]
Dec 25 15:07:18.030: INFO: Created: latency-svc-fz84n
Dec 25 15:07:18.031: INFO: Got endpoints: latency-svc-fz84n [1.485405313s]
Dec 25 15:07:18.118: INFO: Created: latency-svc-mzrvk
Dec 25 15:07:18.136: INFO: Got endpoints: latency-svc-mzrvk [1.540776778s]
Dec 25 15:07:18.194: INFO: Created: latency-svc-q4m7g
Dec 25 15:07:18.341: INFO: Got endpoints: latency-svc-q4m7g [1.644196515s]
Dec 25 15:07:18.349: INFO: Created: latency-svc-htlh2
Dec 25 15:07:18.382: INFO: Got endpoints: latency-svc-htlh2 [1.600101526s]
Dec 25 15:07:18.385: INFO: Created: latency-svc-hs6fc
Dec 25 15:07:18.396: INFO: Got endpoints: latency-svc-hs6fc [1.501341s]
Dec 25 15:07:18.523: INFO: Created: latency-svc-tjg4v
Dec 25 15:07:18.568: INFO: Got endpoints: latency-svc-tjg4v [1.527527923s]
Dec 25 15:07:18.569: INFO: Created: latency-svc-kvr74
Dec 25 15:07:18.596: INFO: Got endpoints: latency-svc-kvr74 [1.54311915s]
Dec 25 15:07:18.714: INFO: Created: latency-svc-jdvzm
Dec 25 15:07:18.894: INFO: Created: latency-svc-pcrgt
Dec 25 15:07:18.895: INFO: Got endpoints: latency-svc-jdvzm [1.778230344s]
Dec 25 15:07:18.960: INFO: Got endpoints: latency-svc-pcrgt [1.743286193s]
Dec 25 15:07:18.971: INFO: Created: latency-svc-2wdzq
Dec 25 15:07:19.163: INFO: Created: latency-svc-mg4fm
Dec 25 15:07:19.164: INFO: Got endpoints: latency-svc-2wdzq [1.767069896s]
Dec 25 15:07:19.365: INFO: Got endpoints: latency-svc-mg4fm [1.896871446s]
Dec 25 15:07:19.373: INFO: Created: latency-svc-p2ddj
Dec 25 15:07:19.408: INFO: Got endpoints: latency-svc-p2ddj [1.737494055s]
Dec 25 15:07:19.605: INFO: Created: latency-svc-6ggzf
Dec 25 15:07:19.613: INFO: Got endpoints: latency-svc-6ggzf [1.793702758s]
Dec 25 15:07:19.869: INFO: Created: latency-svc-sfb6b
Dec 25 15:07:19.890: INFO: Got endpoints: latency-svc-sfb6b [2.057880006s]
Dec 25 15:07:19.965: INFO: Created: latency-svc-v6rl5
Dec 25 15:07:20.134: INFO: Got endpoints: latency-svc-v6rl5 [2.12791896s]
Dec 25 15:07:20.219: INFO: Created: latency-svc-6zcn2
Dec 25 15:07:20.353: INFO: Got endpoints: latency-svc-6zcn2 [2.321502422s]
Dec 25 15:07:20.358: INFO: Created: latency-svc-bjnvp
Dec 25 15:07:20.367: INFO: Got endpoints: latency-svc-bjnvp [2.231247583s]
Dec 25 15:07:20.564: INFO: Created: latency-svc-nn22x
Dec 25 15:07:20.564: INFO: Got endpoints: latency-svc-nn22x [2.222840584s]
Dec 25 15:07:20.607: INFO: Created: latency-svc-rt6dj
Dec 25 15:07:20.691: INFO: Got endpoints: latency-svc-rt6dj [2.308800705s]
Dec 25 15:07:20.704: INFO: Created: latency-svc-ptstj
Dec 25 15:07:20.710: INFO: Got endpoints: latency-svc-ptstj [2.313557835s]
Dec 25 15:07:20.774: INFO: Created: latency-svc-tj5m2
Dec 25 15:07:20.774: INFO: Got endpoints: latency-svc-tj5m2 [2.205239822s]
Dec 25 15:07:20.858: INFO: Created: latency-svc-vmzz7
Dec 25 15:07:20.868: INFO: Got endpoints: latency-svc-vmzz7 [2.270795789s]
Dec 25 15:07:20.920: INFO: Created: latency-svc-k4hq5
Dec 25 15:07:20.933: INFO: Got endpoints: latency-svc-k4hq5 [2.037468991s]
Dec 25 15:07:21.011: INFO: Created: latency-svc-rd7wt
Dec 25 15:07:21.040: INFO: Got endpoints: latency-svc-rd7wt [2.078891288s]
Dec 25 15:07:21.076: INFO: Created: latency-svc-z92sb
Dec 25 15:07:21.145: INFO: Got endpoints: latency-svc-z92sb [1.98150753s]
Dec 25 15:07:21.158: INFO: Created: latency-svc-4j966
Dec 25 15:07:21.159: INFO: Got endpoints: latency-svc-4j966 [1.791715374s]
Dec 25 15:07:21.203: INFO: Created: latency-svc-n7r5l
Dec 25 15:07:21.233: INFO: Got endpoints: latency-svc-n7r5l [1.82415622s]
Dec 25 15:07:21.332: INFO: Created: latency-svc-g4w4r
Dec 25 15:07:21.384: INFO: Created: latency-svc-lk6dq
Dec 25 15:07:21.385: INFO: Got endpoints: latency-svc-lk6dq [1.494203466s]
Dec 25 15:07:21.385: INFO: Got endpoints: latency-svc-g4w4r [1.771607753s]
Dec 25 15:07:21.560: INFO: Created: latency-svc-zjqf4
Dec 25 15:07:21.571: INFO: Got endpoints: latency-svc-zjqf4 [1.436272344s]
Dec 25 15:07:21.701: INFO: Created: latency-svc-zsjv9
Dec 25 15:07:21.715: INFO: Got endpoints: latency-svc-zsjv9 [1.361773275s]
Dec 25 15:07:21.768: INFO: Created: latency-svc-dzm6m
Dec 25 15:07:21.768: INFO: Got endpoints: latency-svc-dzm6m [1.400736166s]
Dec 25 15:07:21.862: INFO: Created: latency-svc-rqrj4
Dec 25 15:07:21.916: INFO: Got endpoints: latency-svc-rqrj4 [1.351470989s]
Dec 25 15:07:21.920: INFO: Created: latency-svc-mgljd
Dec 25 15:07:21.932: INFO: Got endpoints: latency-svc-mgljd [1.239641204s]
Dec 25 15:07:22.036: INFO: Created: latency-svc-4mpzm
Dec 25 15:07:22.052: INFO: Got endpoints: latency-svc-4mpzm [1.341959163s]
Dec 25 15:07:22.052: INFO: Latencies: [223.422476ms 257.628762ms 396.579026ms 428.100769ms 491.534522ms 592.39677ms 726.431812ms 753.072607ms 801.463329ms 959.703715ms 973.868212ms 1.159448978s 1.189270577s 1.189606612s 1.222280676s 1.230281413s 1.234330001s 1.238046202s 1.239641204s 1.24710948s 1.247201216s 1.249632767s 1.250641166s 1.259315945s 1.266433466s 1.266444781s 1.271763827s 1.27688583s 1.278340033s 1.279969404s 1.286448458s 1.286710473s 1.288042768s 1.292204043s 1.294399269s 1.299118882s 1.300723471s 1.301682492s 1.305587834s 1.305613442s 1.30701232s 1.31586181s 1.31619087s 1.317525501s 1.319413353s 1.319895024s 1.322336882s 1.326050482s 1.327600089s 1.331607022s 1.334580048s 1.336845587s 1.337288126s 1.338002481s 1.33903672s 1.341959163s 1.342449266s 1.346333741s 1.34958741s 1.350598243s 1.351470989s 1.353964512s 1.355046391s 1.356197024s 1.361773275s 1.363578441s 1.365299348s 1.372601688s 1.374811878s 1.374965935s 1.388057772s 1.389572078s 1.392717782s 1.395877354s 1.398943168s 1.400221481s 1.400736166s 1.403231848s 1.406531297s 1.409492025s 1.413121744s 1.413814581s 1.415733913s 1.415856492s 1.422664029s 1.42312943s 1.423508939s 1.427951649s 1.43117606s 1.431486483s 1.435930424s 1.436272344s 1.437875428s 1.438631888s 1.439144457s 1.439387564s 1.440132945s 1.44598415s 1.454759574s 1.455719864s 1.455944794s 1.46810297s 1.468758463s 1.472856162s 1.473508409s 1.476180036s 1.485405313s 1.487663989s 1.488472161s 1.494203466s 1.495855453s 1.496066825s 1.501341s 1.504028723s 1.508288335s 1.509628391s 1.51290142s 1.515460374s 1.523911291s 1.52641354s 1.527527923s 1.52784021s 1.529867962s 1.532354103s 1.539236445s 1.540611206s 1.540776778s 1.54311915s 1.543830999s 1.546756409s 1.548505061s 1.551515117s 1.559023145s 1.563781393s 1.566413289s 1.572558112s 1.574429594s 1.589805401s 1.600101526s 1.60391744s 1.612322143s 1.613774516s 1.614822088s 1.615123376s 1.616758149s 1.617048471s 1.618092943s 1.633831915s 1.636210685s 1.644196515s 1.646046277s 1.648203322s 1.653789724s 1.655289176s 1.663962856s 1.664380429s 1.669149147s 1.685013216s 1.687979695s 1.693457146s 1.694070864s 1.698627741s 1.709510942s 1.721910635s 1.723424512s 1.737494055s 1.742388618s 1.743286193s 1.752844218s 1.755477954s 1.757048293s 1.761467038s 1.762427621s 1.764130208s 1.767069896s 1.771607753s 1.778230344s 1.779070193s 1.787051121s 1.791715374s 1.793702758s 1.815845649s 1.821440446s 1.82415622s 1.829800099s 1.87004513s 1.896871446s 1.906029942s 1.98150753s 2.037468991s 2.057880006s 2.078891288s 2.12791896s 2.205239822s 2.222840584s 2.231247583s 2.270795789s 2.308800705s 2.313557835s 2.321502422s]
Dec 25 15:07:22.053: INFO: 50 %ile: 1.455944794s
Dec 25 15:07:22.053: INFO: 90 %ile: 1.793702758s
Dec 25 15:07:22.053: INFO: 99 %ile: 2.313557835s
Dec 25 15:07:22.053: INFO: Total sample count: 200
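The three `%ile` lines above are drawn from the sorted list of 200 endpoint latencies printed just before them. A minimal sketch of a nearest-rank percentile over sorted samples (illustrative only; `percentile` is not the e2e framework's actual helper, and the sample data here is synthetic, not the log's values):

```python
import math

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of the data is less than or equal to it."""
    if not sorted_samples:
        raise ValueError("no samples")
    # 1-based nearest rank, clamped so p=0 still returns the first sample
    rank = max(1, math.ceil(p / 100 * len(sorted_samples)))
    return sorted_samples[rank - 1]

# 200 synthetic latencies in milliseconds (1 ms .. 200 ms), already sorted
samples = list(range(1, 201))
print(percentile(samples, 50))  # 100
print(percentile(samples, 90))  # 180
print(percentile(samples, 99))  # 198
```

The conformance spec then asserts these percentiles stay below fixed bounds ("should not be very high"), so a single slow outlier mostly moves the 99th percentile rather than failing the median check.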
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:07:22.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3673" for this suite.
Dec 25 15:08:04.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:08:04.192: INFO: namespace svc-latency-3673 deletion completed in 42.127050073s

• [SLOW TEST:71.976 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:08:04.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ncdcq in namespace proxy-7685
I1225 15:08:04.372366       9 runners.go:180] Created replication controller with name: proxy-service-ncdcq, namespace: proxy-7685, replica count: 1
I1225 15:08:05.425083       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:06.425473       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:07.425999       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:08.426486       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:09.426893       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:10.427572       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:11.428772       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:12.429803       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1225 15:08:13.430452       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1225 15:08:14.430980       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1225 15:08:15.431677       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1225 15:08:16.432192       9 runners.go:180] proxy-service-ncdcq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 25 15:08:16.443: INFO: setup took 12.123256747s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
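The 16 cases that follow all hit the apiserver's `proxy` subresource, varying the target kind (`pods` vs `services`), the port (by number or by name), and an optional scheme prefix (`http:` / `https:`). A small sketch of how those URL paths are assembled (the `proxy_path` helper is illustrative, not the test's actual code; the namespace and object names below are taken from this log):

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy path of the form
    /api/v1/namespaces/{ns}/{kind}/[{scheme}:]{name}[:{port}]/proxy/"""
    target = name
    if port is not None:
        target = f"{target}:{port}"   # numeric port for pods, port name for services
    if scheme is not None:
        target = f"{scheme}:{target}"  # "http" or "https"; omitted = default scheme
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Reproduce two of the paths exercised below
print(proxy_path("proxy-7685", "pods", "proxy-service-ncdcq-qrpjd", 160, "http"))
# /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/
print(proxy_path("proxy-7685", "services", "proxy-service-ncdcq", "portname1"))
# /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/
```

Each request's log line records the iteration number in parentheses, the truncated response body, the HTTP status, and the round-trip latency.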
Dec 25 15:08:16.489: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 44.581345ms)
Dec 25 15:08:16.489: INFO: (0) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 44.923125ms)
Dec 25 15:08:16.489: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 45.015179ms)
Dec 25 15:08:16.497: INFO: (0) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 53.126795ms)
Dec 25 15:08:16.498: INFO: (0) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 53.840348ms)
Dec 25 15:08:16.498: INFO: (0) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 54.41851ms)
Dec 25 15:08:16.498: INFO: (0) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 54.149711ms)
Dec 25 15:08:16.498: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 53.877587ms)
Dec 25 15:08:16.499: INFO: (0) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 54.559619ms)
Dec 25 15:08:16.499: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 54.130144ms)
Dec 25 15:08:16.499: INFO: (0) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 54.045208ms)
Dec 25 15:08:16.523: INFO: (0) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 77.736923ms)
Dec 25 15:08:16.523: INFO: (0) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 79.098333ms)
Dec 25 15:08:16.525: INFO: (0) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 16.722926ms)
Dec 25 15:08:16.551: INFO: (1) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 22.434213ms)
Dec 25 15:08:16.552: INFO: (1) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 23.169597ms)
Dec 25 15:08:16.552: INFO: (1) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 23.203074ms)
Dec 25 15:08:16.552: INFO: (1) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 24.587659ms)
Dec 25 15:08:16.553: INFO: (1) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 23.864753ms)
Dec 25 15:08:16.554: INFO: (1) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 24.720276ms)
Dec 25 15:08:16.554: INFO: (1) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 24.950699ms)
Dec 25 15:08:16.554: INFO: (1) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 25.88009ms)
Dec 25 15:08:16.557: INFO: (1) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 27.737656ms)
Dec 25 15:08:16.559: INFO: (1) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 30.288608ms)
Dec 25 15:08:16.559: INFO: (1) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 30.207284ms)
Dec 25 15:08:16.559: INFO: (1) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 30.227762ms)
Dec 25 15:08:16.559: INFO: (1) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 30.848827ms)
Dec 25 15:08:16.581: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 20.847035ms)
Dec 25 15:08:16.582: INFO: (2) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 21.77804ms)
Dec 25 15:08:16.583: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 23.302592ms)
Dec 25 15:08:16.583: INFO: (2) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: ... (200; 22.965506ms)
Dec 25 15:08:16.583: INFO: (2) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 23.573209ms)
Dec 25 15:08:16.583: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 23.61685ms)
Dec 25 15:08:16.583: INFO: (2) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 23.972858ms)
Dec 25 15:08:16.586: INFO: (2) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 25.94626ms)
Dec 25 15:08:16.588: INFO: (2) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 28.220196ms)
Dec 25 15:08:16.588: INFO: (2) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 28.3774ms)
Dec 25 15:08:16.590: INFO: (2) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 30.195333ms)
Dec 25 15:08:16.594: INFO: (2) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 33.79434ms)
Dec 25 15:08:16.621: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 26.110116ms)
Dec 25 15:08:16.623: INFO: (3) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 27.823626ms)
Dec 25 15:08:16.623: INFO: (3) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 27.420043ms)
Dec 25 15:08:16.623: INFO: (3) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 28.041056ms)
Dec 25 15:08:16.624: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 29.385446ms)
Dec 25 15:08:16.625: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 30.181749ms)
Dec 25 15:08:16.627: INFO: (3) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 32.943555ms)
Dec 25 15:08:16.628: INFO: (3) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 32.902258ms)
Dec 25 15:08:16.628: INFO: (3) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 32.451173ms)
Dec 25 15:08:16.628: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 32.898918ms)
Dec 25 15:08:16.636: INFO: (3) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 19.205461ms)
Dec 25 15:08:16.664: INFO: (4) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 19.721381ms)
Dec 25 15:08:16.664: INFO: (4) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 19.682442ms)
Dec 25 15:08:16.664: INFO: (4) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 19.472366ms)
Dec 25 15:08:16.664: INFO: (4) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 19.44952ms)
Dec 25 15:08:16.664: INFO: (4) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 19.97569ms)
Dec 25 15:08:16.666: INFO: (4) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 21.451789ms)
Dec 25 15:08:16.666: INFO: (4) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 21.824636ms)
Dec 25 15:08:16.667: INFO: (4) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 22.260049ms)
Dec 25 15:08:16.667: INFO: (4) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 23.163209ms)
Dec 25 15:08:16.667: INFO: (4) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 22.804786ms)
Dec 25 15:08:16.667: INFO: (4) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 4.226327ms)
Dec 25 15:08:16.680: INFO: (5) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 11.98195ms)
Dec 25 15:08:16.680: INFO: (5) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 12.234579ms)
Dec 25 15:08:16.680: INFO: (5) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 12.228076ms)
Dec 25 15:08:16.680: INFO: (5) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 12.286477ms)
Dec 25 15:08:16.682: INFO: (5) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 13.768513ms)
Dec 25 15:08:16.684: INFO: (5) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 16.413882ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 16.644809ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 16.645224ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 16.718182ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 16.820196ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 16.795731ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 16.799044ms)
Dec 25 15:08:16.685: INFO: (5) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 17.273229ms)
Dec 25 15:08:16.686: INFO: (5) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 17.535248ms)
Dec 25 15:08:16.694: INFO: (6) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 7.79557ms)
Dec 25 15:08:16.694: INFO: (6) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 7.890653ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 8.41543ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 8.777535ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 8.926201ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 8.599381ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 8.553985ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 8.961087ms)
Dec 25 15:08:16.695: INFO: (6) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 9.317862ms)
Dec 25 15:08:16.698: INFO: (6) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 12.495209ms)
Dec 25 15:08:16.698: INFO: (6) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 12.177718ms)
Dec 25 15:08:16.699: INFO: (6) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 12.484526ms)
Dec 25 15:08:16.699: INFO: (6) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 12.881994ms)
Dec 25 15:08:16.699: INFO: (6) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 12.957911ms)
Dec 25 15:08:16.699: INFO: (6) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 12.540907ms)
Dec 25 15:08:16.708: INFO: (7) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 8.67441ms)
Dec 25 15:08:16.711: INFO: (7) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 11.855419ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 12.639796ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 12.820356ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: ... (200; 13.037395ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 13.022919ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 13.123714ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 13.171732ms)
Dec 25 15:08:16.712: INFO: (7) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 12.94326ms)
Dec 25 15:08:16.713: INFO: (7) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 13.941737ms)
Dec 25 15:08:16.713: INFO: (7) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 14.196723ms)
Dec 25 15:08:16.713: INFO: (7) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 13.974624ms)
Dec 25 15:08:16.722: INFO: (8) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 8.946883ms)
Dec 25 15:08:16.722: INFO: (8) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 9.13974ms)
Dec 25 15:08:16.722: INFO: (8) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 9.103242ms)
Dec 25 15:08:16.722: INFO: (8) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: ... (200; 10.156461ms)
Dec 25 15:08:16.724: INFO: (8) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 10.584635ms)
Dec 25 15:08:16.725: INFO: (8) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 11.553064ms)
Dec 25 15:08:16.725: INFO: (8) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 11.8777ms)
Dec 25 15:08:16.725: INFO: (8) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 11.86776ms)
Dec 25 15:08:16.725: INFO: (8) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 11.88327ms)
Dec 25 15:08:16.726: INFO: (8) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 12.497727ms)
Dec 25 15:08:16.727: INFO: (8) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 13.776551ms)
Dec 25 15:08:16.738: INFO: (9) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 10.894834ms)
Dec 25 15:08:16.739: INFO: (9) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 11.01579ms)
Dec 25 15:08:16.739: INFO: (9) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 11.090686ms)
Dec 25 15:08:16.739: INFO: (9) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 11.604724ms)
Dec 25 15:08:16.739: INFO: (9) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 11.692622ms)
Dec 25 15:08:16.739: INFO: (9) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 11.362186ms)
Dec 25 15:08:16.739: INFO: (9) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 12.02554ms)
Dec 25 15:08:16.740: INFO: (9) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 12.082722ms)
Dec 25 15:08:16.743: INFO: (9) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 14.984005ms)
Dec 25 15:08:16.745: INFO: (9) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: ... (200; 15.662357ms)
Dec 25 15:08:16.766: INFO: (10) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 15.55141ms)
Dec 25 15:08:16.766: INFO: (10) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 15.64436ms)
Dec 25 15:08:16.767: INFO: (10) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 15.697676ms)
Dec 25 15:08:16.767: INFO: (10) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 15.974282ms)
Dec 25 15:08:16.767: INFO: (10) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 15.788032ms)
Dec 25 15:08:16.767: INFO: (10) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 16.034468ms)
Dec 25 15:08:16.771: INFO: (10) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 20.031762ms)
Dec 25 15:08:16.775: INFO: (11) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 4.089461ms)
Dec 25 15:08:16.776: INFO: (11) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 18.374411ms)
Dec 25 15:08:16.790: INFO: (11) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 18.624851ms)
Dec 25 15:08:16.790: INFO: (11) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 18.493835ms)
Dec 25 15:08:16.790: INFO: (11) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 18.627293ms)
Dec 25 15:08:16.790: INFO: (11) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 18.797295ms)
Dec 25 15:08:16.791: INFO: (11) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 19.808901ms)
Dec 25 15:08:16.791: INFO: (11) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 19.93263ms)
Dec 25 15:08:16.791: INFO: (11) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 19.852164ms)
Dec 25 15:08:16.797: INFO: (11) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 26.153083ms)
Dec 25 15:08:16.798: INFO: (11) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 26.618749ms)
Dec 25 15:08:16.798: INFO: (11) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 26.592134ms)
Dec 25 15:08:16.798: INFO: (11) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 26.464261ms)
Dec 25 15:08:16.798: INFO: (11) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 27.18239ms)
Dec 25 15:08:16.817: INFO: (12) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 18.814719ms)
Dec 25 15:08:16.817: INFO: (12) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 18.893073ms)
Dec 25 15:08:16.817: INFO: (12) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 19.306675ms)
Dec 25 15:08:16.817: INFO: (12) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 19.117629ms)
Dec 25 15:08:16.817: INFO: (12) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 19.019613ms)
Dec 25 15:08:16.818: INFO: (12) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 24.988553ms)
Dec 25 15:08:16.857: INFO: (13) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 25.068496ms)
Dec 25 15:08:16.857: INFO: (13) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 25.226982ms)
Dec 25 15:08:16.857: INFO: (13) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 25.281405ms)
Dec 25 15:08:16.857: INFO: (13) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 25.750214ms)
Dec 25 15:08:16.858: INFO: (13) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 18.358724ms)
Dec 25 15:08:16.880: INFO: (14) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 18.424204ms)
Dec 25 15:08:16.880: INFO: (14) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 17.972364ms)
Dec 25 15:08:16.881: INFO: (14) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 18.270169ms)
Dec 25 15:08:16.881: INFO: (14) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 18.406335ms)
Dec 25 15:08:16.881: INFO: (14) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 18.165519ms)
Dec 25 15:08:16.881: INFO: (14) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 18.97158ms)
Dec 25 15:08:16.899: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 16.872248ms)
Dec 25 15:08:16.900: INFO: (15) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 18.13554ms)
Dec 25 15:08:16.900: INFO: (15) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 17.609556ms)
Dec 25 15:08:16.900: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 18.163895ms)
Dec 25 15:08:16.900: INFO: (15) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 17.927317ms)
Dec 25 15:08:16.900: INFO: (15) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 18.45882ms)
Dec 25 15:08:16.902: INFO: (15) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 20.012744ms)
Dec 25 15:08:16.902: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 20.270373ms)
Dec 25 15:08:16.902: INFO: (15) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 20.53192ms)
Dec 25 15:08:16.903: INFO: (15) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test (200; 10.673484ms)
Dec 25 15:08:16.918: INFO: (16) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 11.384925ms)
Dec 25 15:08:16.918: INFO: (16) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 11.476863ms)
Dec 25 15:08:16.918: INFO: (16) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 14.97833ms)
Dec 25 15:08:16.922: INFO: (16) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 15.057185ms)
Dec 25 15:08:16.922: INFO: (16) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 15.029603ms)
Dec 25 15:08:16.922: INFO: (16) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 15.369645ms)
Dec 25 15:08:16.922: INFO: (16) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 15.383913ms)
Dec 25 15:08:16.923: INFO: (16) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 15.696085ms)
Dec 25 15:08:16.927: INFO: (17) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: test<... (200; 6.93125ms)
Dec 25 15:08:16.930: INFO: (17) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 7.165238ms)
Dec 25 15:08:16.930: INFO: (17) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 7.536481ms)
Dec 25 15:08:16.931: INFO: (17) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 7.641799ms)
Dec 25 15:08:16.938: INFO: (17) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 15.195995ms)
Dec 25 15:08:16.938: INFO: (17) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 15.319682ms)
Dec 25 15:08:16.938: INFO: (17) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 15.216094ms)
Dec 25 15:08:16.938: INFO: (17) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 15.476231ms)
Dec 25 15:08:16.938: INFO: (17) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 15.434449ms)
Dec 25 15:08:16.939: INFO: (17) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 15.802678ms)
Dec 25 15:08:16.939: INFO: (17) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 16.084586ms)
Dec 25 15:08:16.939: INFO: (17) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 15.888811ms)
Dec 25 15:08:16.939: INFO: (17) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 15.94939ms)
Dec 25 15:08:16.952: INFO: (18) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 12.607111ms)
Dec 25 15:08:16.955: INFO: (18) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 16.000145ms)
Dec 25 15:08:16.957: INFO: (18) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 18.416204ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname2/proxy/: bar (200; 18.108497ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 18.130642ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:462/proxy/: tls qux (200; 18.399955ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname1/proxy/: foo (200; 18.748918ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 18.905621ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-ncdcq-qrpjd:1080/proxy/: ... (200; 18.673126ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 18.846498ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:460/proxy/: tls baz (200; 18.647234ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 19.036634ms)
Dec 25 15:08:16.958: INFO: (18) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname2/proxy/: tls qux (200; 18.792354ms)
Dec 25 15:08:16.959: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:162/proxy/: bar (200; 19.748509ms)
Dec 25 15:08:16.959: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 19.749787ms)
Dec 25 15:08:16.961: INFO: (18) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: ... (200; 23.399198ms)
Dec 25 15:08:16.985: INFO: (19) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:160/proxy/: foo (200; 23.794653ms)
Dec 25 15:08:16.986: INFO: (19) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd/proxy/: test (200; 23.876663ms)
Dec 25 15:08:16.987: INFO: (19) /api/v1/namespaces/proxy-7685/pods/proxy-service-ncdcq-qrpjd:1080/proxy/: test<... (200; 25.639353ms)
Dec 25 15:08:16.987: INFO: (19) /api/v1/namespaces/proxy-7685/services/https:proxy-service-ncdcq:tlsportname1/proxy/: tls baz (200; 25.718042ms)
Dec 25 15:08:16.988: INFO: (19) /api/v1/namespaces/proxy-7685/services/proxy-service-ncdcq:portname1/proxy/: foo (200; 26.154182ms)
Dec 25 15:08:16.988: INFO: (19) /api/v1/namespaces/proxy-7685/services/http:proxy-service-ncdcq:portname2/proxy/: bar (200; 26.413579ms)
Dec 25 15:08:16.988: INFO: (19) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-ncdcq-qrpjd:443/proxy/: [truncated HTML response body; the remaining proxy-test log lines and the header of the following [sig-storage] ConfigMap test are missing here]
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-29cdf3ff-848e-4351-9ba0-cf792b14909a
STEP: Creating a pod to test consume configMaps
Dec 25 15:08:33.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20" in namespace "configmap-3907" to be "success or failure"
Dec 25 15:08:33.084: INFO: Pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20": Phase="Pending", Reason="", readiness=false. Elapsed: 20.491237ms
Dec 25 15:08:35.097: INFO: Pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034127398s
Dec 25 15:08:37.252: INFO: Pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188871052s
Dec 25 15:08:39.264: INFO: Pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200822659s
Dec 25 15:08:41.277: INFO: Pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.213942518s
STEP: Saw pod success
Dec 25 15:08:41.277: INFO: Pod "pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20" satisfied condition "success or failure"
Dec 25 15:08:41.283: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20 container configmap-volume-test: 
STEP: delete the pod
Dec 25 15:08:41.428: INFO: Waiting for pod pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20 to disappear
Dec 25 15:08:41.439: INFO: Pod pod-configmaps-f0304155-91bf-4e03-aa2f-72a2d9a78e20 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:08:41.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3907" for this suite.
Dec 25 15:08:47.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:08:47.674: INFO: namespace configmap-3907 deletion completed in 6.228059051s

• [SLOW TEST:14.737 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:08:47.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:08:59.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7188" for this suite.
Dec 25 15:09:05.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:09:06.019: INFO: namespace kubelet-test-7188 deletion completed in 6.163081827s

• [SLOW TEST:18.344 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
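The Kubelet test above schedules a container that always fails and then asserts its status carries a non-empty terminated reason. A sketch of that final check, using minimal stand-in types for the relevant status fields (the real test reads `Pod.Status.ContainerStatuses[i].State.Terminated.Reason` from `corev1`; the `Reason` value and helper name here are illustrative):

```go
package main

import "fmt"

// Minimal stand-ins for the corev1 container-status fields the test inspects.
type terminated struct{ Reason string }
type containerState struct{ Terminated *terminated }

// terminatedReason returns the termination reason and whether one is set;
// the "should have an terminated reason" test asserts it is non-empty.
func terminatedReason(s containerState) (string, bool) {
	if s.Terminated == nil {
		return "", false // container has not terminated
	}
	return s.Terminated.Reason, s.Terminated.Reason != ""
}

func main() {
	r, ok := terminatedReason(containerState{Terminated: &terminated{Reason: "Error"}})
	fmt.Println(r, ok) // prints "Error true"
}
```

In practice a container that exits non-zero typically surfaces the reason "Error", but treat the concrete value above as an assumption.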
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:09:06.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:09:14.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2082" for this suite.
Dec 25 15:09:56.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:09:56.338: INFO: namespace kubelet-test-2082 deletion completed in 42.155445478s

• [SLOW TEST:50.318 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
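The test above runs a busybox command in a pod and then fetches the pod's logs to confirm the expected output appears. Reduced to its essence with no cluster involved — plain `os/exec` stands in for running the container and reading its logs, and the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// outputContains runs a command, captures its combined stdout/stderr,
// and reports whether the trimmed output contains the expected string —
// the same shape as the "should print the output to logs" assertion.
func outputContains(name string, args []string, want string) (bool, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return false, err
	}
	return strings.Contains(strings.TrimSpace(string(out)), want), nil
}

func main() {
	ok, err := outputContains("echo", []string{"Hello", "Kubelet"}, "Hello Kubelet")
	fmt.Println(ok, err)
}
```

The real test compares against logs retrieved through the apiserver (`kubectl logs` equivalent), not a local process.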
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:09:56.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 25 15:10:03.696: INFO: 0 pods remaining
Dec 25 15:10:03.697: INFO: 0 pods has nil DeletionTimestamp
Dec 25 15:10:03.697: INFO: 
STEP: Gathering metrics
W1225 15:10:04.349821       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 25 15:10:04.350: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:10:04.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3987" for this suite.
Dec 25 15:10:14.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:10:14.709: INFO: namespace gc-3987 deletion completed in 10.351878631s

• [SLOW TEST:18.370 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:10:14.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 25 15:10:14.870: INFO: Waiting up to 5m0s for pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e" in namespace "emptydir-3979" to be "success or failure"
Dec 25 15:10:14.885: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.006912ms
Dec 25 15:10:16.901: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030017286s
Dec 25 15:10:18.916: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04496866s
Dec 25 15:10:20.929: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05798701s
Dec 25 15:10:22.940: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069101603s
Dec 25 15:10:24.957: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085975221s
STEP: Saw pod success
Dec 25 15:10:24.957: INFO: Pod "pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e" satisfied condition "success or failure"
Dec 25 15:10:24.963: INFO: Trying to get logs from node iruya-node pod pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e container test-container: 
STEP: delete the pod
Dec 25 15:10:25.065: INFO: Waiting for pod pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e to disappear
Dec 25 15:10:25.133: INFO: Pod pod-b5b8bac6-02db-4326-b956-b0ddd8d3630e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:10:25.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3979" for this suite.
Dec 25 15:10:31.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:10:31.390: INFO: namespace emptydir-3979 deletion completed in 6.245655638s

• [SLOW TEST:16.679 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:10:31.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b62a9197-c360-44a4-91da-c9ea9a24abc5
STEP: Creating a pod to test consume secrets
Dec 25 15:10:31.513: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b" in namespace "projected-5348" to be "success or failure"
Dec 25 15:10:31.546: INFO: Pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.959995ms
Dec 25 15:10:33.554: INFO: Pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040591948s
Dec 25 15:10:35.562: INFO: Pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048345767s
Dec 25 15:10:37.570: INFO: Pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056544283s
Dec 25 15:10:39.592: INFO: Pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077916946s
STEP: Saw pod success
Dec 25 15:10:39.592: INFO: Pod "pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b" satisfied condition "success or failure"
Dec 25 15:10:39.600: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b container projected-secret-volume-test: 
STEP: delete the pod
Dec 25 15:10:39.722: INFO: Waiting for pod pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b to disappear
Dec 25 15:10:39.726: INFO: Pod pod-projected-secrets-c2c9005c-913d-4ad2-ae72-cd0ced14ec5b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:10:39.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5348" for this suite.
Dec 25 15:10:45.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:10:45.949: INFO: namespace projected-5348 deletion completed in 6.218096429s

• [SLOW TEST:14.557 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:10:45.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:11:18.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7941" for this suite.
Dec 25 15:11:24.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:11:24.555: INFO: namespace namespaces-7941 deletion completed in 6.192257852s
STEP: Destroying namespace "nsdeletetest-45" for this suite.
Dec 25 15:11:24.561: INFO: Namespace nsdeletetest-45 was already deleted
STEP: Destroying namespace "nsdeletetest-7626" for this suite.
Dec 25 15:11:30.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:11:30.760: INFO: namespace nsdeletetest-7626 deletion completed in 6.198458354s

• [SLOW TEST:44.811 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:11:30.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:11:30.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3616" for this suite.
Dec 25 15:11:52.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:11:53.078: INFO: namespace kubelet-test-3616 deletion completed in 22.115507249s

• [SLOW TEST:22.317 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:11:53.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fbad6450-09de-4626-958e-be9bef486489
STEP: Creating a pod to test consume secrets
Dec 25 15:11:53.156: INFO: Waiting up to 5m0s for pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388" in namespace "secrets-4430" to be "success or failure"
Dec 25 15:11:53.161: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383618ms
Dec 25 15:11:55.167: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011026908s
Dec 25 15:11:57.175: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018871688s
Dec 25 15:11:59.183: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026974149s
Dec 25 15:12:01.191: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034457027s
Dec 25 15:12:03.198: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041926317s
STEP: Saw pod success
Dec 25 15:12:03.198: INFO: Pod "pod-secrets-c2799154-69a8-4668-b579-90398708a388" satisfied condition "success or failure"
Dec 25 15:12:03.203: INFO: Trying to get logs from node iruya-node pod pod-secrets-c2799154-69a8-4668-b579-90398708a388 container secret-volume-test: 
STEP: delete the pod
Dec 25 15:12:03.650: INFO: Waiting for pod pod-secrets-c2799154-69a8-4668-b579-90398708a388 to disappear
Dec 25 15:12:03.664: INFO: Pod pod-secrets-c2799154-69a8-4668-b579-90398708a388 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:12:03.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4430" for this suite.
Dec 25 15:12:09.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:12:09.961: INFO: namespace secrets-4430 deletion completed in 6.242297544s

• [SLOW TEST:16.882 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:12:09.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 25 15:12:10.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-268'
Dec 25 15:12:12.025: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 25 15:12:12.025: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 25 15:12:14.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-268'
Dec 25 15:12:14.200: INFO: stderr: ""
Dec 25 15:12:14.200: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:12:14.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-268" for this suite.
Dec 25 15:12:20.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:12:20.389: INFO: namespace kubectl-268 deletion completed in 6.183747349s

• [SLOW TEST:10.428 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:12:20.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 25 15:12:20.607: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.274329ms)
Dec 25 15:12:20.611: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.027773ms)
Dec 25 15:12:20.617: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.723872ms)
Dec 25 15:12:20.622: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.568957ms)
Dec 25 15:12:20.628: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.10299ms)
Dec 25 15:12:20.635: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.307244ms)
Dec 25 15:12:20.642: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.981704ms)
Dec 25 15:12:20.648: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.226092ms)
Dec 25 15:12:20.654: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.896269ms)
Dec 25 15:12:20.659: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.219562ms)
Dec 25 15:12:20.663: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.181353ms)
Dec 25 15:12:20.669: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.500368ms)
Dec 25 15:12:20.673: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.472121ms)
Dec 25 15:12:20.679: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.337063ms)
Dec 25 15:12:20.716: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.477597ms)
Dec 25 15:12:20.731: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.637716ms)
Dec 25 15:12:20.740: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.1074ms)
Dec 25 15:12:20.751: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.194718ms)
Dec 25 15:12:20.764: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.270949ms)
Dec 25 15:12:20.772: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.680905ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:12:20.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-573" for this suite.
Dec 25 15:12:26.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:12:26.923: INFO: namespace proxy-573 deletion completed in 6.145474394s

• [SLOW TEST:6.532 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:12:26.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:12:38.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4455" for this suite.
Dec 25 15:13:00.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:13:00.582: INFO: namespace replication-controller-4455 deletion completed in 22.170831735s

• [SLOW TEST:33.658 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:13:00.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:13:06.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4558" for this suite.
Dec 25 15:13:12.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:13:12.416: INFO: namespace watch-4558 deletion completed in 6.26616165s

• [SLOW TEST:11.833 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:13:12.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b
Dec 25 15:13:12.563: INFO: Pod name my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b: Found 0 pods out of 1
Dec 25 15:13:17.575: INFO: Pod name my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b: Found 1 pods out of 1
Dec 25 15:13:17.575: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b" are running
Dec 25 15:13:19.589: INFO: Pod "my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b-hb582" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 15:13:12 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 15:13:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 15:13:12 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-25 15:13:12 +0000 UTC Reason: Message:}])
Dec 25 15:13:19.589: INFO: Trying to dial the pod
Dec 25 15:13:24.637: INFO: Controller my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b: Got expected result from replica 1 [my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b-hb582]: "my-hostname-basic-75117ade-616d-4b7b-92e9-d55a28cfd18b-hb582", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:13:24.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3724" for this suite.
Dec 25 15:13:30.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:13:30.818: INFO: namespace replication-controller-3724 deletion completed in 6.173003708s

• [SLOW TEST:18.402 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:13:30.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1225 15:14:01.705747       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 25 15:14:01.705: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:14:01.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4146" for this suite.
Dec 25 15:14:07.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:14:07.958: INFO: namespace gc-4146 deletion completed in 6.247430515s

• [SLOW TEST:37.139 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:14:07.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 25 15:14:10.466: INFO: Waiting up to 5m0s for pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae" in namespace "downward-api-6345" to be "success or failure"
Dec 25 15:14:10.623: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Pending", Reason="", readiness=false. Elapsed: 156.72825ms
Dec 25 15:14:12.828: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361355626s
Dec 25 15:14:14.834: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36739856s
Dec 25 15:14:16.845: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378089411s
Dec 25 15:14:18.883: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416528025s
Dec 25 15:14:20.900: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.433457599s
Dec 25 15:14:22.910: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.443153701s
STEP: Saw pod success
Dec 25 15:14:22.910: INFO: Pod "downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae" satisfied condition "success or failure"
Dec 25 15:14:22.916: INFO: Trying to get logs from node iruya-node pod downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae container dapi-container: 
STEP: delete the pod
Dec 25 15:14:23.033: INFO: Waiting for pod downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae to disappear
Dec 25 15:14:23.048: INFO: Pod downward-api-3c5dbf33-a0e2-43b8-8cfa-11bbabd190ae no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:14:23.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6345" for this suite.
Dec 25 15:14:29.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:14:29.205: INFO: namespace downward-api-6345 deletion completed in 6.1493014s

• [SLOW TEST:21.246 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:14:29.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 25 15:14:51.980: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:51.980: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:52.398: INFO: Exec stderr: ""
Dec 25 15:14:52.399: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:52.399: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:52.855: INFO: Exec stderr: ""
Dec 25 15:14:52.856: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:52.856: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:53.212: INFO: Exec stderr: ""
Dec 25 15:14:53.212: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:53.212: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:53.568: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 25 15:14:53.568: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:53.568: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:54.015: INFO: Exec stderr: ""
Dec 25 15:14:54.016: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:54.016: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:54.396: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 25 15:14:54.397: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:54.397: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:54.812: INFO: Exec stderr: ""
Dec 25 15:14:54.813: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:54.813: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:55.239: INFO: Exec stderr: ""
Dec 25 15:14:55.239: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:55.240: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:55.869: INFO: Exec stderr: ""
Dec 25 15:14:55.870: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6803 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 25 15:14:55.870: INFO: >>> kubeConfig: /root/.kube/config
Dec 25 15:14:56.171: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:14:56.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6803" for this suite.
Dec 25 15:15:48.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:15:48.362: INFO: namespace e2e-kubelet-etc-hosts-6803 deletion completed in 52.181827559s

• [SLOW TEST:79.157 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:15:48.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 25 15:15:48.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456" in namespace "downward-api-6743" to be "success or failure"
Dec 25 15:15:48.516: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456": Phase="Pending", Reason="", readiness=false. Elapsed: 9.04243ms
Dec 25 15:15:50.530: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023314875s
Dec 25 15:15:52.559: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051823425s
Dec 25 15:15:54.572: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064569414s
Dec 25 15:15:56.591: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083849203s
Dec 25 15:15:58.611: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104141091s
STEP: Saw pod success
Dec 25 15:15:58.612: INFO: Pod "downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456" satisfied condition "success or failure"
Dec 25 15:15:58.645: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456 container client-container: 
STEP: delete the pod
Dec 25 15:15:58.718: INFO: Waiting for pod downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456 to disappear
Dec 25 15:15:58.739: INFO: Pod downwardapi-volume-7279182a-616a-4fc7-b8b1-e2416b3c1456 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:15:58.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6743" for this suite.
Dec 25 15:16:04.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:16:04.921: INFO: namespace downward-api-6743 deletion completed in 6.175527943s

• [SLOW TEST:16.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:16:04.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 25 15:16:05.027: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034388,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 25 15:16:05.028: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034388,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 25 15:16:15.040: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034402,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 25 15:16:15.041: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034402,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 25 15:16:25.054: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034418,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 25 15:16:25.055: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034418,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 25 15:16:35.077: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034432,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 25 15:16:35.077: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-a,UID:f0344787-760e-4316-af81-5f14c817e2b1,ResourceVersion:18034432,Generation:0,CreationTimestamp:2019-12-25 15:16:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 25 15:16:45.093: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-b,UID:a9fa78d1-eda4-4a2e-90d3-e2f0d1fa925b,ResourceVersion:18034447,Generation:0,CreationTimestamp:2019-12-25 15:16:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 25 15:16:45.094: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-b,UID:a9fa78d1-eda4-4a2e-90d3-e2f0d1fa925b,ResourceVersion:18034447,Generation:0,CreationTimestamp:2019-12-25 15:16:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 25 15:16:55.105: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-b,UID:a9fa78d1-eda4-4a2e-90d3-e2f0d1fa925b,ResourceVersion:18034461,Generation:0,CreationTimestamp:2019-12-25 15:16:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 25 15:16:55.105: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5851,SelfLink:/api/v1/namespaces/watch-5851/configmaps/e2e-watch-test-configmap-b,UID:a9fa78d1-eda4-4a2e-90d3-e2f0d1fa925b,ResourceVersion:18034461,Generation:0,CreationTimestamp:2019-12-25 15:16:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:17:05.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5851" for this suite.
Dec 25 15:17:11.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:17:11.324: INFO: namespace watch-5851 deletion completed in 6.20753701s

• [SLOW TEST:66.402 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:17:11.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 25 15:17:11.450: INFO: Waiting up to 5m0s for pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635" in namespace "emptydir-5378" to be "success or failure"
Dec 25 15:17:11.469: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635": Phase="Pending", Reason="", readiness=false. Elapsed: 18.074084ms
Dec 25 15:17:13.478: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027878063s
Dec 25 15:17:15.489: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038375332s
Dec 25 15:17:17.499: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048642823s
Dec 25 15:17:19.694: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243792665s
Dec 25 15:17:21.709: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258250709s
STEP: Saw pod success
Dec 25 15:17:21.709: INFO: Pod "pod-4ac500b1-135e-490e-9e55-421a4bc61635" satisfied condition "success or failure"
Dec 25 15:17:21.719: INFO: Trying to get logs from node iruya-node pod pod-4ac500b1-135e-490e-9e55-421a4bc61635 container test-container: 
STEP: delete the pod
Dec 25 15:17:21.811: INFO: Waiting for pod pod-4ac500b1-135e-490e-9e55-421a4bc61635 to disappear
Dec 25 15:17:21.947: INFO: Pod pod-4ac500b1-135e-490e-9e55-421a4bc61635 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:17:21.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5378" for this suite.
Dec 25 15:17:28.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:17:28.195: INFO: namespace emptydir-5378 deletion completed in 6.239045809s

• [SLOW TEST:16.869 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 25 15:17:28.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 25 15:17:28.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7517'
Dec 25 15:17:28.636: INFO: stderr: ""
Dec 25 15:17:28.637: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 25 15:17:28.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:29.067: INFO: stderr: ""
Dec 25 15:17:29.067: INFO: stdout: "update-demo-nautilus-2lxc6 update-demo-nautilus-7f7jp "
Dec 25 15:17:29.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2lxc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:30.026: INFO: stderr: ""
Dec 25 15:17:30.027: INFO: stdout: ""
Dec 25 15:17:30.027: INFO: update-demo-nautilus-2lxc6 is created but not running
Dec 25 15:17:35.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:35.269: INFO: stderr: ""
Dec 25 15:17:35.269: INFO: stdout: "update-demo-nautilus-2lxc6 update-demo-nautilus-7f7jp "
Dec 25 15:17:35.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2lxc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:35.799: INFO: stderr: ""
Dec 25 15:17:35.799: INFO: stdout: ""
Dec 25 15:17:35.800: INFO: update-demo-nautilus-2lxc6 is created but not running
Dec 25 15:17:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:40.956: INFO: stderr: ""
Dec 25 15:17:40.956: INFO: stdout: "update-demo-nautilus-2lxc6 update-demo-nautilus-7f7jp "
Dec 25 15:17:40.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2lxc6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:41.034: INFO: stderr: ""
Dec 25 15:17:41.035: INFO: stdout: "true"
Dec 25 15:17:41.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2lxc6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:41.114: INFO: stderr: ""
Dec 25 15:17:41.114: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:17:41.114: INFO: validating pod update-demo-nautilus-2lxc6
Dec 25 15:17:41.134: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:17:41.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:17:41.134: INFO: update-demo-nautilus-2lxc6 is verified up and running
Dec 25 15:17:41.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:41.251: INFO: stderr: ""
Dec 25 15:17:41.251: INFO: stdout: "true"
Dec 25 15:17:41.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:41.333: INFO: stderr: ""
Dec 25 15:17:41.333: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:17:41.333: INFO: validating pod update-demo-nautilus-7f7jp
Dec 25 15:17:41.349: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:17:41.349: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:17:41.349: INFO: update-demo-nautilus-7f7jp is verified up and running
STEP: scaling down the replication controller
Dec 25 15:17:41.351: INFO: scanned /root for discovery docs: 
Dec 25 15:17:41.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7517'
Dec 25 15:17:42.537: INFO: stderr: ""
Dec 25 15:17:42.537: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 25 15:17:42.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:42.687: INFO: stderr: ""
Dec 25 15:17:42.687: INFO: stdout: "update-demo-nautilus-2lxc6 update-demo-nautilus-7f7jp "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 25 15:17:47.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:47.917: INFO: stderr: ""
Dec 25 15:17:47.917: INFO: stdout: "update-demo-nautilus-7f7jp "
Dec 25 15:17:47.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:48.132: INFO: stderr: ""
Dec 25 15:17:48.133: INFO: stdout: "true"
Dec 25 15:17:48.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:48.208: INFO: stderr: ""
Dec 25 15:17:48.208: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:17:48.208: INFO: validating pod update-demo-nautilus-7f7jp
Dec 25 15:17:48.213: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:17:48.213: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:17:48.213: INFO: update-demo-nautilus-7f7jp is verified up and running
STEP: scaling up the replication controller
Dec 25 15:17:48.217: INFO: scanned /root for discovery docs: 
Dec 25 15:17:48.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7517'
Dec 25 15:17:49.399: INFO: stderr: ""
Dec 25 15:17:49.399: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 25 15:17:49.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:49.586: INFO: stderr: ""
Dec 25 15:17:49.586: INFO: stdout: "update-demo-nautilus-7f7jp update-demo-nautilus-m5mpx "
Dec 25 15:17:49.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:49.779: INFO: stderr: ""
Dec 25 15:17:49.780: INFO: stdout: "true"
Dec 25 15:17:49.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:49.885: INFO: stderr: ""
Dec 25 15:17:49.885: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:17:49.885: INFO: validating pod update-demo-nautilus-7f7jp
Dec 25 15:17:49.894: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:17:49.895: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:17:49.895: INFO: update-demo-nautilus-7f7jp is verified up and running
Dec 25 15:17:49.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5mpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:50.069: INFO: stderr: ""
Dec 25 15:17:50.069: INFO: stdout: ""
Dec 25 15:17:50.069: INFO: update-demo-nautilus-m5mpx is created but not running
Dec 25 15:17:55.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:17:55.268: INFO: stderr: ""
Dec 25 15:17:55.268: INFO: stdout: "update-demo-nautilus-7f7jp update-demo-nautilus-m5mpx "
Dec 25 15:17:55.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:55.353: INFO: stderr: ""
Dec 25 15:17:55.353: INFO: stdout: "true"
Dec 25 15:17:55.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:55.432: INFO: stderr: ""
Dec 25 15:17:55.432: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:17:55.432: INFO: validating pod update-demo-nautilus-7f7jp
Dec 25 15:17:55.437: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:17:55.437: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:17:55.437: INFO: update-demo-nautilus-7f7jp is verified up and running
Dec 25 15:17:55.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5mpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:17:55.574: INFO: stderr: ""
Dec 25 15:17:55.574: INFO: stdout: ""
Dec 25 15:17:55.574: INFO: update-demo-nautilus-m5mpx is created but not running
Dec 25 15:18:00.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7517'
Dec 25 15:18:00.703: INFO: stderr: ""
Dec 25 15:18:00.703: INFO: stdout: "update-demo-nautilus-7f7jp update-demo-nautilus-m5mpx "
Dec 25 15:18:00.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:18:00.809: INFO: stderr: ""
Dec 25 15:18:00.809: INFO: stdout: "true"
Dec 25 15:18:00.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7f7jp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:18:00.894: INFO: stderr: ""
Dec 25 15:18:00.894: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:18:00.895: INFO: validating pod update-demo-nautilus-7f7jp
Dec 25 15:18:00.901: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:18:00.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:18:00.901: INFO: update-demo-nautilus-7f7jp is verified up and running
Dec 25 15:18:00.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5mpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:18:01.005: INFO: stderr: ""
Dec 25 15:18:01.006: INFO: stdout: "true"
Dec 25 15:18:01.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m5mpx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7517'
Dec 25 15:18:01.129: INFO: stderr: ""
Dec 25 15:18:01.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 25 15:18:01.129: INFO: validating pod update-demo-nautilus-m5mpx
Dec 25 15:18:01.151: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 25 15:18:01.151: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 25 15:18:01.151: INFO: update-demo-nautilus-m5mpx is verified up and running
STEP: using delete to clean up resources
Dec 25 15:18:01.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7517'
Dec 25 15:18:01.281: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 25 15:18:01.281: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 25 15:18:01.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7517'
Dec 25 15:18:01.415: INFO: stderr: "No resources found.\n"
Dec 25 15:18:01.415: INFO: stdout: ""
Dec 25 15:18:01.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7517 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 25 15:18:01.693: INFO: stderr: ""
Dec 25 15:18:01.693: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 25 15:18:01.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7517" for this suite.
Dec 25 15:18:23.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 25 15:18:24.001: INFO: namespace kubectl-7517 deletion completed in 22.29420877s

• [SLOW TEST:55.805 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
Dec 25 15:18:24.001: INFO: Running AfterSuite actions on all nodes
Dec 25 15:18:24.002: INFO: Running AfterSuite actions on node 1
Dec 25 15:18:24.002: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8533.057 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS