I1217 12:56:09.389024 8 e2e.go:243] Starting e2e run "62618ab6-e8a5-4484-b827-2fa21745f237" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576587368 - Will randomize all specs
Will run 215 of 4412 specs

Dec 17 12:56:09.620: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 12:56:09.626: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 17 12:56:09.656: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 17 12:56:09.683: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 17 12:56:09.683: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 17 12:56:09.683: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 17 12:56:09.691: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 17 12:56:09.691: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 17 12:56:09.691: INFO: e2e test version: v1.15.7
Dec 17 12:56:09.693: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:56:09.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
Dec 17 12:56:09.892: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 17 12:56:09.915: INFO: Waiting up to 5m0s for pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb" in namespace "var-expansion-7287" to be "success or failure"
Dec 17 12:56:09.921: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024143ms
Dec 17 12:56:11.930: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015075293s
Dec 17 12:56:13.942: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027260829s
Dec 17 12:56:15.950: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035398788s
Dec 17 12:56:18.203: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.287720776s
Dec 17 12:56:20.213: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.29786629s
Dec 17 12:56:22.223: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.308578472s
STEP: Saw pod success
Dec 17 12:56:22.224: INFO: Pod "var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb" satisfied condition "success or failure"
Dec 17 12:56:22.227: INFO: Trying to get logs from node iruya-node pod var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb container dapi-container:
STEP: delete the pod
Dec 17 12:56:22.317: INFO: Waiting for pod var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb to disappear
Dec 17 12:56:22.321: INFO: Pod var-expansion-29fe2be7-7bf9-4470-b0ff-466f9b4bcecb no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:56:22.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7287" for this suite.
Dec 17 12:56:28.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:28.574: INFO: namespace var-expansion-7287 deletion completed in 6.246995204s

• [SLOW TEST:18.882 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
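
For reference, the var-expansion test above creates a pod whose container args reference an environment variable through the $(VAR_NAME) syntax; the kubelet substitutes the value before the container starts, and the test passes when the echoed output matches. A minimal Go sketch of such a pod spec, using illustrative names and values rather than the test's generated ones:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose args use $(TEST_VAR); the kubelet substitutes the value
	// of the TEST_VAR env var before the container command runs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo test-value is $(TEST_VAR)"},
				Env: []corev1.EnvVar{{
					Name:  "TEST_VAR",
					Value: "test-value",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
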
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:56:28.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 17 12:56:28.698: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 12:56:28.721: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 12:56:28.724: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Dec 17 12:56:28.736: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 17 12:56:28.736: INFO: Container weave ready: true, restart count 0
Dec 17 12:56:28.736: INFO: Container weave-npc ready: true, restart count 0
Dec 17 12:56:28.736: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.736: INFO: Container kube-proxy ready: true, restart count 0
Dec 17 12:56:28.736: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Dec 17 12:56:28.746: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container kube-apiserver ready: true, restart count 0
Dec 17 12:56:28.746: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container kube-scheduler ready: true, restart count 7
Dec 17 12:56:28.746: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container coredns ready: true, restart count 0
Dec 17 12:56:28.746: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container coredns ready: true, restart count 0
Dec 17 12:56:28.746: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container etcd ready: true, restart count 0
Dec 17 12:56:28.746: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container weave ready: true, restart count 0
Dec 17 12:56:28.746: INFO: Container weave-npc ready: true, restart count 0
Dec 17 12:56:28.746: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container kube-controller-manager ready: true, restart count 10
Dec 17 12:56:28.746: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 17 12:56:28.746: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 17 12:56:28.884: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 17 12:56:28.884: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397.15e129e08e17b6d6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7281/filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397 to iruya-server-sfge57q7djm7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397.15e129e1f2230230], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397.15e129e2a59c84da], Reason = [Created], Message = [Created container filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397.15e129e2ccbf7ea0], Reason = [Started], Message = [Started container filler-pod-6cca1949-d76e-4a4f-8efd-598bf81cd397]
STEP: Considering event: Type = [Normal], Name = [filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97.15e129e0896e038a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7281/filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97 to iruya-node]
STEP: Considering event: Type = [Normal], Name = [filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97.15e129e1ef5e3c50], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97.15e129e2aa2ecabe], Reason = [Created], Message = [Created container filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97]
STEP: Considering event: Type = [Normal], Name = [filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97.15e129e2da671154], Reason = [Started], Message = [Started container filler-pod-fda56695-eca7-4aeb-8872-fa94291a0e97]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15e129e3570fa3dc], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:56:42.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7281" for this suite.
Dec 17 12:56:48.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:56:48.310: INFO: namespace sched-pred-7281 deletion completed in 6.234611637s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:19.736 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
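
The scheduling test above works by summing the CPU requests already on each node, starting filler pods that consume the remainder, and then confirming that one more pod is rejected with the FailedScheduling event shown. A sketch of a pause pod carrying an explicit CPU request, the filler-pod pattern; the quantity is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pause pod that requests a fixed slice of node CPU. The scheduler
	// sums such requests per node; once allocatable CPU is exhausted,
	// further pods fail with "Insufficient cpu".
	filler := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(filler, "", "  ")
	fmt.Println(string(out))
}
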
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:56:48.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 17 12:56:49.741: INFO: Waiting up to 5m0s for pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a" in namespace "emptydir-1967" to be "success or failure"
Dec 17 12:56:49.796: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.977433ms
Dec 17 12:56:51.872: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12998872s
Dec 17 12:56:53.886: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144015339s
Dec 17 12:56:55.901: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159174715s
Dec 17 12:56:57.910: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168710762s
Dec 17 12:56:59.924: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182253194s
Dec 17 12:57:01.933: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.191321498s
Dec 17 12:57:03.947: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.20566461s
STEP: Saw pod success
Dec 17 12:57:03.948: INFO: Pod "pod-fd92211e-6a5e-459c-9951-be9dd5c4531a" satisfied condition "success or failure"
Dec 17 12:57:03.953: INFO: Trying to get logs from node iruya-node pod pod-fd92211e-6a5e-459c-9951-be9dd5c4531a container test-container:
STEP: delete the pod
Dec 17 12:57:04.120: INFO: Waiting for pod pod-fd92211e-6a5e-459c-9951-be9dd5c4531a to disappear
Dec 17 12:57:04.128: INFO: Pod pod-fd92211e-6a5e-459c-9951-be9dd5c4531a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:57:04.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1967" for this suite.
Dec 17 12:57:10.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:57:10.271: INFO: namespace emptydir-1967 deletion completed in 6.134366292s

• [SLOW TEST:21.960 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
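
The EmptyDir test above mounts an emptyDir volume on the default medium (node disk), writes a file from a container running as a non-root user, and verifies the 0777 permissions. A rough sketch of the volume and security-context wiring; the UID, paths, and shell command are illustrative, not the test's exact ones:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium: backed by node disk rather than tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/test/file && chmod 0777 /mnt/test/file && ls -l /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1001), // non-root, as in the (non-root,...) variants
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
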
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:57:10.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 12:57:10.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-376'
Dec 17 12:57:12.200: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 12:57:12.201: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 17 12:57:14.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-376'
Dec 17 12:57:14.518: INFO: stderr: ""
Dec 17 12:57:14.519: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:57:14.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-376" for this suite.
Dec 17 12:57:20.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:57:20.854: INFO: namespace kubectl-376 deletion completed in 6.321833194s

• [SLOW TEST:10.583 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
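
The stderr above shows that kubectl run --generator=deployment/apps.v1 was already deprecated in this 1.15-era suite (it was later removed in favor of kubectl create deployment). What the generator emitted is an apps/v1 Deployment; a hedged Go sketch of an equivalent object, where the "run" label is an assumption about what the generator attached:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// Selector must match the pod template labels.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
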
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:57:20.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 17 12:57:20.951: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5719,SelfLink:/api/v1/namespaces/watch-5719/configmaps/e2e-watch-test-label-changed,UID:c89b9dbc-f875-4a65-a11b-c072427eb1ef,ResourceVersion:17008990,Generation:0,CreationTimestamp:2019-12-17 12:57:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 12:57:20.952: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5719,SelfLink:/api/v1/namespaces/watch-5719/configmaps/e2e-watch-test-label-changed,UID:c89b9dbc-f875-4a65-a11b-c072427eb1ef,ResourceVersion:17008991,Generation:0,CreationTimestamp:2019-12-17 12:57:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 17 12:57:20.952: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5719,SelfLink:/api/v1/namespaces/watch-5719/configmaps/e2e-watch-test-label-changed,UID:c89b9dbc-f875-4a65-a11b-c072427eb1ef,ResourceVersion:17008992,Generation:0,CreationTimestamp:2019-12-17 12:57:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 17 12:57:31.081: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5719,SelfLink:/api/v1/namespaces/watch-5719/configmaps/e2e-watch-test-label-changed,UID:c89b9dbc-f875-4a65-a11b-c072427eb1ef,ResourceVersion:17009007,Generation:0,CreationTimestamp:2019-12-17 12:57:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 12:57:31.082: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5719,SelfLink:/api/v1/namespaces/watch-5719/configmaps/e2e-watch-test-label-changed,UID:c89b9dbc-f875-4a65-a11b-c072427eb1ef,ResourceVersion:17009008,Generation:0,CreationTimestamp:2019-12-17 12:57:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 17 12:57:31.082: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5719,SelfLink:/api/v1/namespaces/watch-5719/configmaps/e2e-watch-test-label-changed,UID:c89b9dbc-f875-4a65-a11b-c072427eb1ef,ResourceVersion:17009009,Generation:0,CreationTimestamp:2019-12-17 12:57:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:57:31.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5719" for this suite.
Dec 17 12:57:37.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:57:37.380: INFO: namespace watch-5719 deletion completed in 6.29443548s

• [SLOW TEST:16.524 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
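
The "Got : ADDED/MODIFIED/DELETED" lines above are events from a label-selected watch on ConfigMaps: moving the label out from under the selector surfaces as DELETED, and restoring it as ADDED. A minimal client-go sketch of the same pattern; it uses current client-go signatures (the context argument was added after the 1.15-era client), and the kubeconfig path and namespace are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch only configmaps carrying the label the test selects on.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type is ADDED, MODIFIED, or DELETED, matching the log above.
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
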
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:57:37.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 17 12:57:37.524: INFO: Waiting up to 5m0s for pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3" in namespace "downward-api-7033" to be "success or failure"
Dec 17 12:57:37.542: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.517016ms
Dec 17 12:57:39.555: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031106891s
Dec 17 12:57:41.571: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046952051s
Dec 17 12:57:43.587: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062615361s
Dec 17 12:57:45.640: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115473382s
Dec 17 12:57:47.656: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13139659s
STEP: Saw pod success
Dec 17 12:57:47.656: INFO: Pod "downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3" satisfied condition "success or failure"
Dec 17 12:57:47.661: INFO: Trying to get logs from node iruya-node pod downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3 container dapi-container:
STEP: delete the pod
Dec 17 12:57:47.725: INFO: Waiting for pod downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3 to disappear
Dec 17 12:57:47.798: INFO: Pod downward-api-cc07a597-5dc3-4a55-aeeb-39c1eb3a48e3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:57:47.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7033" for this suite.
Dec 17 12:57:53.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:57:53.974: INFO: namespace downward-api-7033 deletion completed in 6.15946078s

• [SLOW TEST:16.593 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
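
Exposing the pod UID as an env var goes through the downward API's fieldRef. A short illustrative sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					// fieldRef exposes pod metadata to the container at runtime.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
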
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:57:53.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 17 12:57:54.132: INFO: Waiting up to 5m0s for pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7" in namespace "var-expansion-7819" to be "success or failure"
Dec 17 12:57:54.147: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.185197ms
Dec 17 12:57:56.161: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028583712s
Dec 17 12:57:58.170: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037596665s
Dec 17 12:58:00.183: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050987776s
Dec 17 12:58:02.204: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071406801s
Dec 17 12:58:04.214: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081708169s
STEP: Saw pod success
Dec 17 12:58:04.214: INFO: Pod "var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7" satisfied condition "success or failure"
Dec 17 12:58:04.218: INFO: Trying to get logs from node iruya-node pod var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7 container dapi-container:
STEP: delete the pod
Dec 17 12:58:04.300: INFO: Waiting for pod var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7 to disappear
Dec 17 12:58:04.310: INFO: Pod var-expansion-54e64043-ce71-4ae5-9a65-8e3a3ec964e7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:58:04.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7819" for this suite.
Dec 17 12:58:10.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:58:10.449: INFO: namespace var-expansion-7819 deletion completed in 6.132995079s

• [SLOW TEST:16.475 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
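
This is the command-field twin of the args test earlier: $(VAR_NAME) in command entries is expanded by the kubelet the same way, and $$(VAR_NAME) escapes to the literal string $(VAR_NAME). An illustrative sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-command-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				// $(MESSAGE) is expanded by the kubelet; $$(MESSAGE) survives
				// as the literal "$(MESSAGE)" inside the container.
				Command: []string{"sh", "-c", "echo $(MESSAGE); echo $$(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from command"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
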
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:58:10.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 17 12:58:10.661: INFO: Waiting up to 5m0s for pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75" in namespace "emptydir-5518" to be "success or failure"
Dec 17 12:58:10.672: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.612491ms
Dec 17 12:58:12.790: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129109033s
Dec 17 12:58:14.800: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138212476s
Dec 17 12:58:16.832: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170742661s
Dec 17 12:58:18.840: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178262186s
Dec 17 12:58:20.848: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186382906s
STEP: Saw pod success
Dec 17 12:58:20.848: INFO: Pod "pod-17ce8eb8-6293-4052-8fa1-463935b0ad75" satisfied condition "success or failure"
Dec 17 12:58:20.853: INFO: Trying to get logs from node iruya-node pod pod-17ce8eb8-6293-4052-8fa1-463935b0ad75 container test-container:
STEP: delete the pod
Dec 17 12:58:21.078: INFO: Waiting for pod pod-17ce8eb8-6293-4052-8fa1-463935b0ad75 to disappear
Dec 17 12:58:21.087: INFO: Pod pod-17ce8eb8-6293-4052-8fa1-463935b0ad75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:58:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5518" for this suite.
Dec 17 12:58:27.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:58:27.276: INFO: namespace emptydir-5518 deletion completed in 6.178375308s

• [SLOW TEST:16.826 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
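
The tmpfs EmptyDir variants above differ from the default-medium test only in the volume source: Medium is set to Memory, so the volume is RAM-backed. The relevant fragment, as a sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// RAM-backed emptyDir, as used by the (..., tmpfs) test variants.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
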
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:58:27.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-82d6ca41-0f3c-4304-97f3-42daaa70160d
STEP: Creating a pod to test consume secrets
Dec 17 12:58:27.418: INFO: Waiting up to 5m0s for pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b" in namespace "secrets-123" to be "success or failure"
Dec 17 12:58:27.475: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.48417ms
Dec 17 12:58:29.490: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071784908s
Dec 17 12:58:31.497: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078462337s
Dec 17 12:58:33.503: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085346885s
Dec 17 12:58:35.777: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358852259s
Dec 17 12:58:37.809: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.391329045s
STEP: Saw pod success
Dec 17 12:58:37.810: INFO: Pod "pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b" satisfied condition "success or failure"
Dec 17 12:58:37.820: INFO: Trying to get logs from node iruya-node pod pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b container secret-volume-test:
STEP: delete the pod
Dec 17 12:58:37.927: INFO: Waiting for pod pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b to disappear
Dec 17 12:58:37.941: INFO: Pod pod-secrets-94bc46ef-ae40-4964-97e0-902cd83bcc2b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:58:37.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-123" for this suite.
Dec 17 12:58:43.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:58:44.054: INFO: namespace secrets-123 deletion completed in 6.098926154s

• [SLOW TEST:16.777 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
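
"With mappings" means the secret is projected through an items list that maps a key to a chosen file path (optionally with a per-file mode) instead of using the key name directly. A sketch; the secret name, key, path, and mode are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Map the secret key "data-1" to the file "new-path-data-1"
	// inside the mounted volume, instead of the default key name.
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map-demo",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: int32Ptr(0400),
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
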
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:58:44.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 12:58:44.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11" in namespace "projected-8763" to be "success or failure"
Dec 17 12:58:44.262: INFO: Pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11": Phase="Pending", Reason="", readiness=false. Elapsed: 14.187021ms
Dec 17 12:58:46.278: INFO: Pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029882004s
Dec 17 12:58:48.291: INFO: Pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043029614s
Dec 17 12:58:50.303: INFO: Pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055699419s
Dec 17 12:58:52.313: INFO: Pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065237083s
STEP: Saw pod success
Dec 17 12:58:52.313: INFO: Pod "downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11" satisfied condition "success or failure"
Dec 17 12:58:52.316: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11 container client-container:
STEP: delete the pod
Dec 17 12:58:52.447: INFO: Waiting for pod downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11 to disappear
Dec 17 12:58:52.459: INFO: Pod downwardapi-volume-090f26d0-0c47-4f1f-bb44-d1f09e624a11 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:58:52.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8763" for this suite.
Dec 17 12:58:58.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:58:58.639: INFO: namespace projected-8763 deletion completed in 6.171303688s

• [SLOW TEST:14.585 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
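
"Podname only" projects metadata.name into a single file via a downwardAPI source inside a projected volume, which the test container then reads back. An illustrative sketch of that volume:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Projected volume exposing the pod's own name as the file "podname".
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
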
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:58:58.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-36ee042e-89cb-49e6-96fa-f9239751e140
STEP: Creating a pod to test consume configMaps
Dec 17 12:58:58.826: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c" in namespace "projected-4943" to be "success or failure"
Dec 17 12:58:58.837: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.808574ms
Dec 17 12:59:00.847: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02062251s
Dec 17 12:59:02.865: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038722866s
Dec 17 12:59:04.883: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057364113s
Dec 17 12:59:07.008: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181769475s
Dec 17 12:59:09.015: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188981986s
STEP: Saw pod success
Dec 17 12:59:09.015: INFO: Pod "pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c" satisfied condition "success or failure"
Dec 17 12:59:09.018: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c container projected-configmap-volume-test:
STEP: delete the pod
Dec 17 12:59:09.096: INFO: Waiting for pod pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c to disappear
Dec 17 12:59:09.185: INFO: Pod pod-projected-configmaps-aac4a5b1-eb4d-4cc1-a962-43986cd5765c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:59:09.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4943" for this suite.
Dec 17 12:59:15.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:59:15.365: INFO: namespace projected-4943 deletion completed in 6.1706363s

• [SLOW TEST:16.725 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
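
defaultMode on a projected volume sets the permission bits for every projected file unless an individual item overrides them; the test verifies the mode on the mounted file. A sketch with illustrative values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// 0400: every projected file is read-only for the owner.
				DefaultMode: int32Ptr(0400),
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-demo",
						},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
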
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:59:15.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 17 12:59:15.597: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:59:31.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3189" for this suite.
Dec 17 12:59:37.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:59:37.693: INFO: namespace pods-3189 deletion completed in 6.191742396s

• [SLOW TEST:22.328 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
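
The submit/remove test sets up a watch on the pod, submits it, deletes it gracefully, and then expects the kubelet's termination notice and the final DELETED watch event, as the STEP lines above show. A compressed client-go sketch of the delete-and-observe half; modern client-go signatures, and the pod name, namespace, and grace period are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default")

	// Watch for lifecycle events on the target pod by name.
	w, err := pods.Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove-demo",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Graceful delete: the kubelet gets up to 30s to stop containers
	// before the API object disappears (surfacing as a DELETED event).
	if err := pods.Delete(context.TODO(), "pod-submit-remove-demo", metav1.DeleteOptions{
		GracePeriodSeconds: int64Ptr(30),
	}); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == "DELETED" {
			return
		}
	}
}
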
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:59:37.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 12:59:47.003: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 12:59:47.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6209" for this suite.
Dec 17 12:59:53.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 12:59:53.467: INFO: namespace container-runtime-6209 deletion completed in 6.298070395s

• [SLOW TEST:15.773 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
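
TerminationMessagePolicy FallbackToLogsOnError tells the kubelet to use the tail of the container log as the termination message only when the container fails and the message file is empty; here the pod succeeds after writing the file, so the message comes from the file, hence the "OK" comparison above. An illustrative container fragment:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		// Default path for the termination message file.
		TerminationMessagePath: "/dev/termination-log",
		// On error with an empty message file, fall back to the log tail;
		// on success (as in this test) the file contents win.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
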
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 12:59:53.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-cnph
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 12:59:53.662: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cnph" in namespace "subpath-5294" to be "success or failure"
Dec 17 12:59:53.671: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Pending", Reason="", readiness=false. Elapsed: 8.655285ms
Dec 17 12:59:55.686: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024398766s
Dec 17 12:59:57.694: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03258791s
Dec 17 12:59:59.708: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046307958s
Dec 17 13:00:01.725: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 8.063375386s
Dec 17 13:00:03.733: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 10.071633274s
Dec 17 13:00:05.754: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 12.092640506s
Dec 17 13:00:07.761: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 14.099097416s
Dec 17 13:00:09.776: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 16.114187427s
Dec 17 13:00:11.800: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 18.138296166s
Dec 17 13:00:13.867: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 20.205201526s
Dec 17 13:00:15.947: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 22.285215313s
Dec 17 13:00:17.955: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 24.293305005s
Dec 17 13:00:19.969: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 26.307172662s
Dec 17 13:00:21.988: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Running", Reason="", readiness=true. Elapsed: 28.325808839s
Dec 17 13:00:23.997: INFO: Pod "pod-subpath-test-projected-cnph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.335156312s
STEP: Saw pod success
Dec 17 13:00:23.997: INFO: Pod "pod-subpath-test-projected-cnph" satisfied condition "success or failure"
Dec 17 13:00:24.001: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-cnph container test-container-subpath-projected-cnph:
STEP: delete the pod
Dec 17 13:00:24.112: INFO: Waiting for pod pod-subpath-test-projected-cnph to disappear
Dec 17 13:00:24.117: INFO: Pod pod-subpath-test-projected-cnph no longer exists
STEP: Deleting pod pod-subpath-test-projected-cnph
Dec 17 13:00:24.118: INFO: Deleting pod "pod-subpath-test-projected-cnph" in namespace "subpath-5294"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:00:24.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5294" for this suite.
Dec 17 13:00:30.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:00:30.272: INFO: namespace subpath-5294 deletion completed in 6.148285775s

• [SLOW TEST:36.804 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
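
The atomic-writer subpath test mounts a single entry of a projected volume via subPath and has the container re-check the file while the test runs (hence the long Running phase above). The mount wiring, sketched with illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Mount only the entry "projected-configmap-key" from the projected
	// volume, rather than the whole directory.
	m := corev1.VolumeMount{
		Name:      "projected-volume",
		MountPath: "/test-volume/projected-configmap-key",
		SubPath:   "projected-configmap-key",
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}
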
13:00:45.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1428" for this suite. Dec 17 13:00:51.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:00:51.565: INFO: namespace init-container-1428 deletion completed in 6.193805646s • [SLOW TEST:21.292 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:00:51.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:00:51.625: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:00:52.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2574" for this suite. 
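The creating/deleting CustomResourceDefinition spec above amounts to registering a CRD with the apiserver and removing it again. A minimal sketch of such an object, written against the apiextensions.k8s.io/v1beta1 API that a v1.15 apiserver serves (the group, kind, and names below are hypothetical stand-ins, not the randomized ones the suite generates):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # CRD object names must take the form <plural>.<group>
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu

Running kubectl create -f crd.yaml followed by kubectl delete crd noxus.mygroup.example.com exercises the same create/delete cycle by hand.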
Dec 17 13:00:58.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:00:58.891: INFO: namespace custom-resource-definition-2574 deletion completed in 6.1812403s • [SLOW TEST:7.325 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:00:58.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 17 13:00:58.995: INFO: Waiting up to 5m0s for pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920" in namespace "emptydir-4402" to be "success or failure" Dec 17 13:00:59.030: INFO: Pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920": Phase="Pending", Reason="", readiness=false. Elapsed: 34.764731ms Dec 17 13:01:01.040: INFO: Pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044635284s Dec 17 13:01:03.052: INFO: Pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056599502s Dec 17 13:01:05.063: INFO: Pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067681053s Dec 17 13:01:07.072: INFO: Pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076965631s STEP: Saw pod success Dec 17 13:01:07.072: INFO: Pod "pod-d8c52dde-e274-4840-be3c-cf4979ff2920" satisfied condition "success or failure" Dec 17 13:01:07.077: INFO: Trying to get logs from node iruya-node pod pod-d8c52dde-e274-4840-be3c-cf4979ff2920 container test-container: STEP: delete the pod Dec 17 13:01:07.145: INFO: Waiting for pod pod-d8c52dde-e274-4840-be3c-cf4979ff2920 to disappear Dec 17 13:01:07.156: INFO: Pod pod-d8c52dde-e274-4840-be3c-cf4979ff2920 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:01:07.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4402" for this suite. 
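The tmpfs emptyDir spec just finishing here creates a pod whose volume is declared with medium "Memory" and then checks the mount's type and file mode from inside the container. A rough hand-applied equivalent (pod, container, and volume names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Print the mount type and the permission bits the spec asserts on
    command: ["sh", "-c", "mount | grep /test-volume; stat -c %a /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed; omit the field for the node's default medium

With restartPolicy Never the pod runs once and lands in Succeeded, which is the "success or failure" condition the framework is polling for in the entries above.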
Dec 17 13:01:13.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:01:13.311: INFO: namespace emptydir-4402 deletion completed in 6.146429304s • [SLOW TEST:14.420 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:01:13.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:01:23.662: INFO: Waiting up to 5m0s for pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f" in namespace "pods-7469" to be "success or failure" Dec 17 13:01:23.688: INFO: Pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.756022ms Dec 17 13:01:25.698: INFO: Pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035805952s Dec 17 13:01:27.705: INFO: Pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042754063s Dec 17 13:01:29.719: INFO: Pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056040951s Dec 17 13:01:31.728: INFO: Pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065307813s STEP: Saw pod success Dec 17 13:01:31.728: INFO: Pod "client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f" satisfied condition "success or failure" Dec 17 13:01:31.734: INFO: Trying to get logs from node iruya-node pod client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f container env3cont: STEP: delete the pod Dec 17 13:01:31.915: INFO: Waiting for pod client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f to disappear Dec 17 13:01:31.926: INFO: Pod client-envvars-f0736c58-f156-4f83-9748-5a362cd6304f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:01:31.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7469" for this suite. 
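The service-environment-variables spec above relies on the kubelet injecting {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables into containers for every service that already exists when the pod starts, which is why the client pod is only created after a service is up. A hedged sketch of that arrangement (object names here are illustrative, not the suite's fixtures):

apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: env-client
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.29
    # Injected variable names are upper-cased, with dashes mapped to underscores
    command: ["sh", "-c", "env | grep FOOSERVICE"]

The variables are injected whether or not the service has ready endpoints; what matters is that the service exists before the client pod is created.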
Dec 17 13:02:18.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:02:18.110: INFO: namespace pods-7469 deletion completed in 46.173661985s • [SLOW TEST:64.798 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:02:18.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Dec 17 13:02:18.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1508' Dec 17 13:02:18.621: INFO: stderr: "" Dec 17 13:02:18.621: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 17 13:02:18.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1508' Dec 17 13:02:18.833: INFO: stderr: "" Dec 17 13:02:18.833: INFO: stdout: "update-demo-nautilus-4q5h6 update-demo-nautilus-vr227 " Dec 17 13:02:18.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4q5h6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:02:19.022: INFO: stderr: "" Dec 17 13:02:19.022: INFO: stdout: "" Dec 17 13:02:19.022: INFO: update-demo-nautilus-4q5h6 is created but not running Dec 17 13:02:24.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1508' Dec 17 13:02:24.844: INFO: stderr: "" Dec 17 13:02:24.844: INFO: stdout: "update-demo-nautilus-4q5h6 update-demo-nautilus-vr227 " Dec 17 13:02:24.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4q5h6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:02:25.347: INFO: stderr: "" Dec 17 13:02:25.347: INFO: stdout: "" Dec 17 13:02:25.347: INFO: update-demo-nautilus-4q5h6 is created but not running Dec 17 13:02:30.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1508' Dec 17 13:02:30.640: INFO: stderr: "" Dec 17 13:02:30.641: INFO: stdout: "update-demo-nautilus-4q5h6 update-demo-nautilus-vr227 " Dec 17 13:02:30.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4q5h6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:02:30.777: INFO: stderr: "" Dec 17 13:02:30.777: INFO: stdout: "true" Dec 17 13:02:30.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4q5h6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:02:31.100: INFO: stderr: "" Dec 17 13:02:31.100: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 13:02:31.100: INFO: validating pod update-demo-nautilus-4q5h6 Dec 17 13:02:31.126: INFO: got data: { "image": "nautilus.jpg" } Dec 17 13:02:31.126: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 13:02:31.127: INFO: update-demo-nautilus-4q5h6 is verified up and running Dec 17 13:02:31.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vr227 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:02:31.232: INFO: stderr: "" Dec 17 13:02:31.232: INFO: stdout: "true" Dec 17 13:02:31.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vr227 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:02:31.379: INFO: stderr: "" Dec 17 13:02:31.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 13:02:31.379: INFO: validating pod update-demo-nautilus-vr227 Dec 17 13:02:31.390: INFO: got data: { "image": "nautilus.jpg" } Dec 17 13:02:31.391: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 17 13:02:31.391: INFO: update-demo-nautilus-vr227 is verified up and running STEP: rolling-update to new replication controller Dec 17 13:02:31.393: INFO: scanned /root for discovery docs: Dec 17 13:02:31.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1508' Dec 17 13:03:02.601: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 17 13:03:02.602: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 17 13:03:02.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1508' Dec 17 13:03:02.959: INFO: stderr: "" Dec 17 13:03:02.959: INFO: stdout: "update-demo-kitten-hmh52 update-demo-kitten-qlx6s update-demo-nautilus-vr227 " STEP: Replicas for name=update-demo: expected=2 actual=3 Dec 17 13:03:07.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1508' Dec 17 13:03:08.193: INFO: stderr: "" Dec 17 13:03:08.194: INFO: stdout: "update-demo-kitten-hmh52 update-demo-kitten-qlx6s " Dec 17 13:03:08.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hmh52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:03:08.357: INFO: stderr: "" Dec 17 13:03:08.358: INFO: stdout: "true" Dec 17 13:03:08.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hmh52 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:03:08.509: INFO: stderr: "" Dec 17 13:03:08.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 17 13:03:08.509: INFO: validating pod update-demo-kitten-hmh52 Dec 17 13:03:08.549: INFO: got data: { "image": "kitten.jpg" } Dec 17 13:03:08.549: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 17 13:03:08.549: INFO: update-demo-kitten-hmh52 is verified up and running Dec 17 13:03:08.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qlx6s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:03:08.712: INFO: stderr: "" Dec 17 13:03:08.712: INFO: stdout: "true" Dec 17 13:03:08.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qlx6s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1508' Dec 17 13:03:08.859: INFO: stderr: "" Dec 17 13:03:08.859: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 17 13:03:08.859: INFO: validating pod update-demo-kitten-qlx6s Dec 17 13:03:08.884: INFO: got data: { "image": "kitten.jpg" } Dec 17 13:03:08.884: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 17 13:03:08.884: INFO: update-demo-kitten-qlx6s is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:03:08.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1508" for this suite. Dec 17 13:03:36.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:03:37.119: INFO: namespace kubectl-1508 deletion completed in 28.230317746s • [SLOW TEST:79.008 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:03:37.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 17 13:03:50.344: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1192695a-5863-4a58-9fdd-3baa43c2fcab" Dec 17 13:03:50.344: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1192695a-5863-4a58-9fdd-3baa43c2fcab" in namespace "pods-6874" to be "terminated due to deadline exceeded" Dec 17 13:03:50.369: INFO: Pod "pod-update-activedeadlineseconds-1192695a-5863-4a58-9fdd-3baa43c2fcab": Phase="Running", Reason="", readiness=true. Elapsed: 24.587873ms Dec 17 13:03:52.380: INFO: Pod "pod-update-activedeadlineseconds-1192695a-5863-4a58-9fdd-3baa43c2fcab": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.035015244s Dec 17 13:03:52.380: INFO: Pod "pod-update-activedeadlineseconds-1192695a-5863-4a58-9fdd-3baa43c2fcab" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:03:52.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6874" for this suite. Dec 17 13:03:58.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:03:58.554: INFO: namespace pods-6874 deletion completed in 6.164853754s • [SLOW TEST:21.435 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:03:58.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Dec 17 13:03:58.718: INFO: PodSpec: initContainers in spec.initContainers Dec 17 13:05:04.135: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-12cbaa36-251a-4ea2-993f-9b0dee8069b4", GenerateName:"", Namespace:"init-container-4781", SelfLink:"/api/v1/namespaces/init-container-4781/pods/pod-init-12cbaa36-251a-4ea2-993f-9b0dee8069b4", UID:"4c061686-3c0a-4bd9-a72c-94dedad84713", ResourceVersion:"17010143", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712184638, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"718572235"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-77ddk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002316580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-77ddk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-77ddk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-77ddk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00149a868), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ac0300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00149a8f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00149a910)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00149a918), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00149a91c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712184638, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712184638, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712184638, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712184638, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0020d7840), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c4a770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc001c4a7e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://27490d789a37b358c7a04ab491274a0d354e491e38ee31721094f4f3166ca13f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0020d7880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0020d7860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:05:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4781" for this suite. Dec 17 13:05:26.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:05:26.415: INFO: namespace init-container-4781 deletion completed in 22.198228865s • [SLOW TEST:87.860 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:05:26.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Dec 17 13:05:26.496: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:05:26.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-340" for this suite. Dec 17 13:05:32.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:05:32.836: INFO: namespace kubectl-340 deletion completed in 6.155390799s • [SLOW TEST:6.420 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:05:32.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 in namespace container-probe-97 Dec 17 13:05:45.019: INFO: Started pod liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 in namespace container-probe-97 STEP: checking the pod's current state and verifying that restartCount is present Dec 17 13:05:45.023: INFO: Initial restart count of pod liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 is 0 Dec 17 13:06:01.120: INFO: Restart count of pod container-probe-97/liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 is now 1 (16.096822325s elapsed) Dec 17 13:06:21.262: INFO: Restart count of pod container-probe-97/liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 is now 2 (36.239005937s elapsed) Dec 17 13:06:41.380: INFO: Restart count of pod container-probe-97/liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 is now 3 (56.356492834s elapsed) Dec 17 13:07:01.667: INFO: Restart count of pod container-probe-97/liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 is now 4 (1m16.644165176s elapsed) Dec 17 13:08:08.365: INFO: Restart count of pod container-probe-97/liveness-41407f8d-113f-4936-b1e6-518ae7287ee5 is now 5 (2m23.341399346s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:08:08.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-97" for this suite. 
Dec 17 13:08:14.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:08:14.808: INFO: namespace container-probe-97 deletion completed in 6.214706256s • [SLOW TEST:161.970 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:08:14.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5082c9ca-a50b-45cd-9dea-3949cd3c3555 STEP: Creating a pod to test consume configMaps Dec 17 13:08:15.044: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e" in namespace "projected-3103" to be "success or failure" Dec 17 13:08:15.145: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Pending", Reason="", readiness=false. Elapsed: 101.587263ms Dec 17 13:08:17.168: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123727087s Dec 17 13:08:19.186: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142032433s Dec 17 13:08:21.196: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151880977s Dec 17 13:08:23.201: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157562618s Dec 17 13:08:25.209: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165023363s Dec 17 13:08:27.229: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.184898644s STEP: Saw pod success Dec 17 13:08:27.229: INFO: Pod "pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e" satisfied condition "success or failure" Dec 17 13:08:27.241: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e container projected-configmap-volume-test: STEP: delete the pod Dec 17 13:08:27.353: INFO: Waiting for pod pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e to disappear Dec 17 13:08:27.365: INFO: Pod pod-projected-configmaps-9d6d93a2-6b62-4077-afaf-556c5739857e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:08:27.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3103" for this suite. Dec 17 13:08:33.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:08:33.536: INFO: namespace projected-3103 deletion completed in 6.167656565s • [SLOW TEST:18.727 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:08:33.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5246/configmap-test-6e8c35cc-02a4-4426-8645-095189bc6cb9 STEP: Creating a pod to test consume configMaps Dec 17 13:08:33.666: INFO: Waiting up to 5m0s for pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c" in namespace "configmap-5246" to be "success or failure" Dec 17 13:08:33.687: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.279032ms Dec 17 13:08:35.697: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030456339s Dec 17 13:08:37.709: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042807259s Dec 17 13:08:39.719: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052338668s Dec 17 13:08:41.725: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058320981s Dec 17 13:08:43.737: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.07103374s STEP: Saw pod success Dec 17 13:08:43.738: INFO: Pod "pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c" satisfied condition "success or failure" Dec 17 13:08:43.743: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c container env-test: STEP: delete the pod Dec 17 13:08:43.825: INFO: Waiting for pod pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c to disappear Dec 17 13:08:43.920: INFO: Pod pod-configmaps-6126f02a-ac66-403a-b023-8c288728ab3c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:08:43.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5246" for this suite. Dec 17 13:08:49.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:08:50.057: INFO: namespace configmap-5246 deletion completed in 6.120267049s • [SLOW TEST:16.520 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:08:50.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 17 13:08:50.148: INFO: Waiting up to 5m0s for pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580" in namespace "emptydir-4118" to be "success or failure" Dec 17 13:08:50.230: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580": Phase="Pending", Reason="", readiness=false. Elapsed: 81.463831ms Dec 17 13:08:52.241: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092481796s Dec 17 13:08:54.254: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10589266s Dec 17 13:08:56.262: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113727886s Dec 17 13:08:58.276: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12796875s Dec 17 13:09:00.294: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.145664768s STEP: Saw pod success Dec 17 13:09:00.294: INFO: Pod "pod-0a77725e-a97f-477b-8fb6-07bf24ece580" satisfied condition "success or failure" Dec 17 13:09:00.303: INFO: Trying to get logs from node iruya-node pod pod-0a77725e-a97f-477b-8fb6-07bf24ece580 container test-container: STEP: delete the pod Dec 17 13:09:00.381: INFO: Waiting for pod pod-0a77725e-a97f-477b-8fb6-07bf24ece580 to disappear Dec 17 13:09:00.407: INFO: Pod pod-0a77725e-a97f-477b-8fb6-07bf24ece580 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:09:00.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4118" for this suite. Dec 17 13:09:06.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:09:06.570: INFO: namespace emptydir-4118 deletion completed in 6.148571214s • [SLOW TEST:16.512 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:09:06.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-9f58d680-3bcd-47ab-9fc5-6e21aa4315f9 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-9f58d680-3bcd-47ab-9fc5-6e21aa4315f9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:09:18.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-419" for this suite. 
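The projected-configmap update spec above works because configMap-backed and projected volumes are kept in sync by the kubelet, which swaps a new payload in atomically (a hidden ..data symlink flip), so readers see either the old or the new content but never a torn write. A sketch of a pod that would observe such an update (hypothetical names):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-watcher
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    # Poll the projected file so an edit to the ConfigMap shows up in the pod's output
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 2; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-cm-demo

Editing the ConfigMap (kubectl edit configmap projected-cm-demo) and watching the pod's output reproduces the "waiting to observe update in volume" step logged above.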
Dec 17 13:09:41.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:09:41.788: INFO: namespace projected-419 deletion completed in 22.809856419s • [SLOW TEST:35.218 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:09:41.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2076 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2076 STEP: Creating statefulset with conflicting port in namespace statefulset-2076 STEP: Waiting until pod test-pod will start running in namespace statefulset-2076 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2076 Dec 17 13:09:52.048: INFO: Observed stateful pod in namespace: statefulset-2076, name: ss-0, uid: 5b0cb573-d5c7-42a8-8eb1-da16cf766410, status phase: Pending. Waiting for statefulset controller to delete. 
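The StatefulSet spec above forces ss-0 to be rejected by the kubelet through a host-port collision: a plain pod is pinned to the node and grabs a hostPort first, then a StatefulSet pod demanding the same hostPort keeps failing the PodFitsHostPorts check, and the spec expects the statefulset controller to delete and recreate the rejected pod at least once. A hypothetical pair of manifests in that shape (the suite discovers the node and port itself; iruya-node and 21017 are taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-node            # pin to the same node the StatefulSet will use
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 21017
      hostPort: 21017             # occupies the host port first
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      nodeName: iruya-node
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 21017
          hostPort: 21017         # collides with test-pod, so ss-0 is repeatedly rejected

In this run the recreation is never observed within the five-minute window, so the spec fails immediately below and the framework dumps the pod descriptions, events, and node state that follow.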
Dec 17 13:14:52.048: INFO: Pod ss-0 expected to be re-created at least once [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 17 13:14:52.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-2076' Dec 17 13:14:54.920: INFO: stderr: "" Dec 17 13:14:54.920: INFO: stdout: "Name: ss-0\nNamespace: statefulset-2076\nPriority: 0\nNode: iruya-node/\nLabels: baz=blah\n controller-revision-hash=ss-6f98bdb9c4\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-8vtgx (ro)\nVolumes:\n default-token-8vtgx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-8vtgx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m3s kubelet, iruya-node Predicate PodFitsHostPorts failed\n" Dec 17 13:14:54.920: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-2076 Priority: 0 Node: iruya-node/ Labels: baz=blah controller-revision-hash=ss-6f98bdb9c4 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-8vtgx (ro) Volumes: default-token-8vtgx: Type: Secret (a volume populated by a Secret) SecretName: default-token-8vtgx Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning PodFitsHostPorts 5m3s kubelet, iruya-node Predicate PodFitsHostPorts failed Dec 17 13:14:54.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-2076 --tail=100' Dec 17 13:14:55.128: INFO: rc: 1 Dec 17 13:14:55.128: INFO: Last 100 log lines of ss-0: Dec 17 13:14:55.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-2076' Dec 17 13:14:55.231: INFO: stderr: "" Dec 17 13:14:55.231: INFO: stdout: "Name: test-pod\nNamespace: statefulset-2076\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Tue, 17 Dec 2019 13:09:42 +0000\nLabels: \nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nContainers:\n nginx:\n Container ID: docker://5c845bedbcb2d000bbaaaa75a4d2ef71437bb9cf00a706230755f2f802a777df\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Tue, 17 Dec 2019 13:09:51 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-8vtgx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n 
default-token-8vtgx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-8vtgx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m9s kubelet, iruya-node Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m5s kubelet, iruya-node Created container nginx\n Normal Started 5m4s kubelet, iruya-node Started container nginx\n" Dec 17 13:14:55.231: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-2076 Priority: 0 Node: iruya-node/10.96.3.65 Start Time: Tue, 17 Dec 2019 13:09:42 +0000 Labels: Annotations: Status: Running IP: 10.44.0.1 Containers: nginx: Container ID: docker://5c845bedbcb2d000bbaaaa75a4d2ef71437bb9cf00a706230755f2f802a777df Image: docker.io/library/nginx:1.14-alpine Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Tue, 17 Dec 2019 13:09:51 +0000 Ready: True Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-8vtgx (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-8vtgx: Type: Secret (a volume populated by a Secret) SecretName: default-token-8vtgx Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m9s kubelet, iruya-node Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m5s kubelet, iruya-node Created container nginx Normal Started 5m4s kubelet, iruya-node Started container nginx Dec 17 13:14:55.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-2076 --tail=100' Dec 17 13:14:55.620: INFO: stderr: "" Dec 17 13:14:55.620: INFO: stdout: "" Dec 17 13:14:55.620: INFO: Last 100 log lines of test-pod: Dec 17 13:14:55.620: INFO: Deleting all statefulset in ns statefulset-2076 Dec 17 13:14:55.640: INFO: Scaling statefulset ss to 0 Dec 17 13:15:05.693: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:15:05.797: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace "statefulset-2076". STEP: Found 16 events. Dec 17 13:15:05.849: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again. 
Dec 17 13:15:05.849: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-2076/ss is recreating failed Pod ss-0
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:42 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:46 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:46 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:48 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:49 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:50 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:50 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:51 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:51 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 13:15:05.850: INFO: At 2019-12-17 13:09:51 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Dec 17 13:15:05.858: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Dec 17 13:15:05.858: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:09:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:09:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:09:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:09:42 +0000 UTC  }]
Dec 17 13:15:05.858: INFO:
Dec 17 13:15:05.867: INFO: Logging node info for node iruya-node
Dec 17 13:15:05.877: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:17011118,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl:
0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-17 13:14:05 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-17 13:14:05 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-17 13:14:05 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-12-17 13:14:05 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 
57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} 
{[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Dec 17 13:15:05.878: INFO: Logging kubelet events for node iruya-node Dec 17 13:15:05.889: INFO: Logging pods the kubelet thinks is on node iruya-node Dec 17 13:15:05.909: INFO: test-pod started at 2019-12-17 13:09:42 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:05.909: INFO: Container nginx ready: true, restart count 0 Dec 17 13:15:05.909: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:05.909: INFO: Container kube-proxy ready: true, restart count 0 Dec 17 13:15:05.909: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded) Dec 17 13:15:05.909: INFO: Container weave ready: true, restart count 0 Dec 17 13:15:05.909: INFO: Container weave-npc ready: true, restart count 0 W1217 13:15:05.947269 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 17 13:15:06.007: INFO: Latency metrics for node iruya-node Dec 17 13:15:06.007: INFO: Logging node info for node iruya-server-sfge57q7djm7 Dec 17 13:15:06.035: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:17011164,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2019-12-17 13:14:40 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-12-17 13:14:40 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-12-17 13:14:40 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready 
True 2019-12-17 13:14:40 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} 
{[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Dec 17 13:15:06.036: INFO: Logging kubelet events for node iruya-server-sfge57q7djm7 Dec 17 13:15:06.042: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 Dec 17 13:15:06.200: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container etcd ready: true, restart count 0 Dec 17 13:15:06.200: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded) Dec 17 13:15:06.200: INFO: Container weave ready: true, restart count 0 Dec 17 13:15:06.200: INFO: Container weave-npc ready: true, restart count 0 Dec 17 13:15:06.200: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container coredns ready: true, restart count 0 Dec 17 13:15:06.200: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container kube-controller-manager ready: true, restart count 10 Dec 17 13:15:06.200: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container kube-proxy ready: true, restart count 0 Dec 17 13:15:06.200: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container kube-apiserver ready: true, restart count 0 Dec 17 13:15:06.200: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container kube-scheduler ready: true, restart count 7 Dec 17 13:15:06.200: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded) Dec 17 13:15:06.200: INFO: Container coredns ready: true, restart count 0 W1217 13:15:06.213463 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 17 13:15:06.270: INFO: Latency metrics for node iruya-server-sfge57q7djm7 Dec 17 13:15:06.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2076" for this suite. 
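The failure recorded in this spec comes down to the PodFitsHostPorts events collected above: the manually created test-pod already binds host port 21017 on iruya-node, and every recreated ss-0 requests the same hostPort, so the kubelet rejects each recreation attempt at admission and ss-0 never leaves Pending. Below is a minimal Go sketch of the check involved, assuming the k8s.io/api/core/v1 types; the helper names are hypothetical and this is not the predicate's actual implementation.

```go
// Hypothetical sketch, not the suite's or the kubelet's code: the essence of
// the PodFitsHostPorts check that keeps rejecting ss-0 in the run above.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// hostPortsOf collects the non-zero host ports a pod's containers request.
func hostPortsOf(p *v1.Pod) []int32 {
	var ports []int32
	for _, c := range p.Spec.Containers {
		for _, cp := range c.Ports {
			if cp.HostPort != 0 {
				ports = append(ports, cp.HostPort)
			}
		}
	}
	return ports
}

// fitsHostPorts reports whether newPod's host ports are all free, given the
// pods already admitted on the node.
func fitsHostPorts(newPod *v1.Pod, nodePods []*v1.Pod) bool {
	taken := map[int32]bool{}
	for _, p := range nodePods {
		for _, port := range hostPortsOf(p) {
			taken[port] = true
		}
	}
	for _, port := range hostPortsOf(newPod) {
		if taken[port] {
			return false
		}
	}
	return true
}

// podWithHostPort builds a minimal pod that binds the given host port, the
// way both test-pod and ss-0 do in the run above.
func podWithHostPort(name string, port int32) *v1.Pod {
	p := &v1.Pod{}
	p.Name = name
	p.Spec.Containers = []v1.Container{{
		Name:  "nginx",
		Image: "docker.io/library/nginx:1.14-alpine",
		Ports: []v1.ContainerPort{{ContainerPort: port, HostPort: port}},
	}}
	return p
}

func main() {
	testPod := podWithHostPort("test-pod", 21017)
	ss0 := podWithHostPort("ss-0", 21017)
	// Prints false: port 21017 is taken, matching the repeated
	// "Predicate PodFitsHostPorts failed" events.
	fmt.Println(fitsHostPorts(ss0, []*v1.Pod{testPod}))
}
```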
Dec 17 13:15:30.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:15:30.432: INFO: namespace statefulset-2076 deletion completed in 24.153599428s

• Failure [348.644 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Dec 17 13:14:52.048: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:15:30.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 17 13:15:31.107: INFO: created pod pod-service-account-defaultsa
Dec 17 13:15:31.108: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 17 13:15:31.195: INFO: created pod pod-service-account-mountsa
Dec 17 13:15:31.195: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 17 13:15:31.241: INFO: created pod pod-service-account-nomountsa
Dec 17 13:15:31.242: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 17 13:15:31.252: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 17 13:15:31.252: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 17 13:15:31.270: INFO: created pod pod-service-account-mountsa-mountspec
Dec 17 13:15:31.270: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 17 13:15:31.409: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 17 13:15:31.409: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 17 13:15:31.461: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 17 13:15:31.461: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 17 13:15:31.564: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 17 13:15:31.565: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 17 13:15:31.649: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 17 13:15:31.649: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:15:31.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9628" for this suite.
Dec 17 13:15:59.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:15:59.741: INFO: namespace svcaccounts-9628 deletion completed in 27.907775868s

• [SLOW TEST:29.309 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:15:59.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 13:15:59.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af" in namespace "downward-api-9778" to be "success or failure"
Dec 17 13:16:00.011: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Pending", Reason="", readiness=false. Elapsed: 26.801526ms
Dec 17 13:16:02.020: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036230933s
Dec 17 13:16:04.041: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057312228s
Dec 17 13:16:06.049: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064596631s
Dec 17 13:16:08.057: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072857854s
Dec 17 13:16:10.067: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082682975s
Dec 17 13:16:12.087: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.102989762s
STEP: Saw pod success
Dec 17 13:16:12.087: INFO: Pod "downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af" satisfied condition "success or failure"
Dec 17 13:16:12.096: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af container client-container:
STEP: delete the pod
Dec 17 13:16:12.253: INFO: Waiting for pod downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af to disappear
Dec 17 13:16:12.264: INFO: Pod downwardapi-volume-663facf6-08dc-4dda-ace9-4cccb003f8af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:16:12.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9778" for this suite.
Dec 17 13:16:20.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:16:20.431: INFO: namespace downward-api-9778 deletion completed in 8.160117578s

• [SLOW TEST:20.689 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:16:20.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 13:16:20.804: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8bb06af5-40ca-4284-a8b7-06eb28570e23", Controller:(*bool)(0xc002ea6942), BlockOwnerDeletion:(*bool)(0xc002ea6943)}}
Dec 17 13:16:20.839: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ce6d9968-1552-46b3-acd9-a625229b3a84", Controller:(*bool)(0xc002ae332a), BlockOwnerDeletion:(*bool)(0xc002ae332b)}}
Dec 17 13:16:20.904: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4276dc0f-3d07-4e1a-84ad-899b7618aa60", Controller:(*bool)(0xc002ae34ca), BlockOwnerDeletion:(*bool)(0xc002ae34cb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:16:25.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5460" for this suite.
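The three OwnerReferences dumps above form a deliberate circle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. The spec passes because the garbage collector's dependency graph tolerates such cycles instead of waiting forever for an owner to disappear. A sketch of how that circle is wired, assuming the k8s.io/api/core/v1 and k8s.io/apimachinery metav1 types; the ownedBy helper is hypothetical, not the suite's code.

```go
// Hypothetical sketch, not the suite's code: the dependency circle logged
// above. Deleting the namespace must still remove all three pods.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedBy builds an OwnerReference to owner with Controller and
// BlockOwnerDeletion set, as in the OwnerReferences dumps above. In a real
// cluster the UID comes from the object returned by the API server.
func ownedBy(owner *v1.Pod) metav1.OwnerReference {
	truth := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &truth,
		BlockOwnerDeletion: &truth,
	}
}

func main() {
	pod1, pod2, pod3 := &v1.Pod{}, &v1.Pod{}, &v1.Pod{}
	pod1.Name, pod2.Name, pod3.Name = "pod1", "pod2", "pod3"
	// Close the circle: pod1 <- pod3 <- pod2 <- pod1. Nothing outside the
	// cycle references it, so the collector may delete the whole group.
	pod1.OwnerReferences = []metav1.OwnerReference{ownedBy(pod3)}
	pod2.OwnerReferences = []metav1.OwnerReference{ownedBy(pod1)}
	pod3.OwnerReferences = []metav1.OwnerReference{ownedBy(pod2)}
	fmt.Println(pod1.OwnerReferences[0].Name, pod2.OwnerReferences[0].Name, pod3.OwnerReferences[0].Name)
}
```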
Dec 17 13:16:32.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:16:32.123: INFO: namespace gc-5460 deletion completed in 6.133574158s

• [SLOW TEST:11.692 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:16:32.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b77320c4-29af-40cb-86a4-bfba31f1fc6d
STEP: Creating a pod to test consume secrets
Dec 17 13:16:32.327: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d" in namespace "projected-6996" to be "success or failure"
Dec 17 13:16:32.336: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.737459ms
Dec 17 13:16:34.346: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017538328s
Dec 17 13:16:36.359: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031077383s
Dec 17 13:16:38.379: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050906855s
Dec 17 13:16:40.388: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060168955s
Dec 17 13:16:42.396: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067795826s
Dec 17 13:16:44.403: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.074576262s
Dec 17 13:16:46.419: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.090497902s
STEP: Saw pod success
Dec 17 13:16:46.419: INFO: Pod "pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d" satisfied condition "success or failure"
Dec 17 13:16:46.425: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d container projected-secret-volume-test:
STEP: delete the pod
Dec 17 13:16:46.544: INFO: Waiting for pod pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d to disappear
Dec 17 13:16:46.707: INFO: Pod pod-projected-secrets-30439e27-d0f0-42d7-91d0-aa169968f22d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:16:46.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6996" for this suite.
Dec 17 13:16:52.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:16:52.911: INFO: namespace projected-6996 deletion completed in 6.184608467s

• [SLOW TEST:20.787 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:16:52.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 13:16:53.159: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 17 13:16:58.178: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 17 13:17:04.198: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 17 13:17:06.205: INFO: Creating deployment "test-rollover-deployment"
Dec 17 13:17:06.235: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 17 13:17:08.265: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 17 13:17:08.280: INFO: Ensure that both replica sets have 1 created replica
Dec 17 13:17:08.289: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 17 13:17:08.300: INFO: Updating deployment test-rollover-deployment
Dec 17 13:17:08.300: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 17 13:17:10.441: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 17 13:17:10.471: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 17 13:17:10.485: INFO: all replica sets need to contain the
pod-template-hash label Dec 17 13:17:10.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185429, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:12.509: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:12.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185429, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:14.561: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:14.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185429, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:16.504: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:16.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185429, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:18.516: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:18.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185429, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:20.509: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:20.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185429, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:22.508: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:22.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185441, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:24.514: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:24.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185441, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:26.508: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:26.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185441, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:28.511: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:28.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185441, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:30.512: INFO: all replica sets need to contain the pod-template-hash label Dec 17 13:17:30.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, 
ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185441, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185426, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:17:32.616: INFO: Dec 17 13:17:32.616: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 17 13:17:32.638: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6694,SelfLink:/apis/apps/v1/namespaces/deployment-6694/deployments/test-rollover-deployment,UID:9f7f8c9a-b8b6-4ae9-b6cc-0abeee308f66,ResourceVersion:17011678,Generation:2,CreationTimestamp:2019-12-17 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-17 13:17:06 +0000 UTC 2019-12-17 13:17:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-17 13:17:32 +0000 UTC 2019-12-17 13:17:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 17 13:17:32.658: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6694,SelfLink:/apis/apps/v1/namespaces/deployment-6694/replicasets/test-rollover-deployment-854595fc44,UID:91d87e18-99ed-408a-93b0-cc86e4c5dd5d,ResourceVersion:17011667,Generation:2,CreationTimestamp:2019-12-17 13:17:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9f7f8c9a-b8b6-4ae9-b6cc-0abeee308f66 0xc001dfaa27 0xc001dfaa28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 17 13:17:32.658: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 17 13:17:32.659: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6694,SelfLink:/apis/apps/v1/namespaces/deployment-6694/replicasets/test-rollover-controller,UID:53c0f459-1bfb-4d4f-8870-9a7e3e85775f,ResourceVersion:17011676,Generation:2,CreationTimestamp:2019-12-17 13:16:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9f7f8c9a-b8b6-4ae9-b6cc-0abeee308f66 0xc001dfa957 0xc001dfa958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 17 13:17:32.659: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6694,SelfLink:/apis/apps/v1/namespaces/deployment-6694/replicasets/test-rollover-deployment-9b8b997cf,UID:5feebae4-7a5c-4ec6-a51b-4fb3f64ef8d4,ResourceVersion:17011625,Generation:2,CreationTimestamp:2019-12-17 13:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9f7f8c9a-b8b6-4ae9-b6cc-0abeee308f66 0xc001dfaaf0 0xc001dfaaf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 17 13:17:32.666: INFO: Pod "test-rollover-deployment-854595fc44-mhgq6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-mhgq6,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6694,SelfLink:/api/v1/namespaces/deployment-6694/pods/test-rollover-deployment-854595fc44-mhgq6,UID:ff01edd6-6273-45dd-88ad-36743d17215d,ResourceVersion:17011651,Generation:0,CreationTimestamp:2019-12-17 13:17:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
91d87e18-99ed-408a-93b0-cc86e4c5dd5d 0xc001f923b7 0xc001f923b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kkvgb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kkvgb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-kkvgb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f92430} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f92450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:17:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:17:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:17:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:17:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-17 13:17:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-17 13:17:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://727ed4dcaab6026173eb396ef1dc86699106db7b3f3326f24ea9aebceec77290}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:17:32.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6694" for this suite. 
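
For reference, the rollover exercised above can be reproduced outside the suite. A minimal manifest carrying the same strategy fields dumped in the log (maxUnavailable: 0, maxSurge: 1, minReadySeconds: 10, the rollover-pod selector, and the redis image of the rolled-over-to revision) would look roughly like this; only fields visible in this run are used, everything else is left to defaults:

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-rollover-deployment
    spec:
      replicas: 1
      minReadySeconds: 10
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      selector:
        matchLabels:
          name: rollover-pod
      template:
        metadata:
          labels:
            name: rollover-pod
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
    EOF

With maxUnavailable: 0 and maxSurge: 1 the controller keeps one ready pod at all times and only scales the old ReplicaSet down after the new pod has been Ready for minReadySeconds, which matches the Available=True / NewReplicaSetAvailable conditions in the status dump above.
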
Dec 17 13:17:40.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:17:40.818: INFO: namespace deployment-6694 deletion completed in 8.148598338s
• [SLOW TEST:47.906 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:17:40.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 13:17:41.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6" in namespace "downward-api-6939" to be "success or failure"
Dec 17 13:17:41.076: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Pending", Reason="", readiness=false. Elapsed: 57.970505ms
Dec 17 13:17:43.083: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065382403s
Dec 17 13:17:45.093: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075388227s
Dec 17 13:17:47.103: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08507386s
Dec 17 13:17:49.111: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093673141s
Dec 17 13:17:51.123: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.105654277s
Dec 17 13:17:53.131: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.113243492s
STEP: Saw pod success
Dec 17 13:17:53.131: INFO: Pod "downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6" satisfied condition "success or failure"
Dec 17 13:17:53.135: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6 container client-container:
STEP: delete the pod
Dec 17 13:17:53.582: INFO: Waiting for pod downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6 to disappear
Dec 17 13:17:53.600: INFO: Pod downwardapi-volume-b6641547-fae2-453b-b9f1-3658ff5135f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:17:53.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6939" for this suite.
Dec 17 13:17:59.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:17:59.758: INFO: namespace downward-api-6939 deletion completed in 6.147341896s
• [SLOW TEST:18.939 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:17:59.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c317c384-f7df-4614-b9b4-8ece33b129b5
STEP: Creating secret with name s-test-opt-upd-776bb712-4ebc-46aa-ab3d-beb081e69142
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c317c384-f7df-4614-b9b4-8ece33b129b5
STEP: Updating secret s-test-opt-upd-776bb712-4ebc-46aa-ab3d-beb081e69142
STEP: Creating secret with name s-test-opt-create-7cc3e9a2-8f60-4445-ad3f-57c24bacfa48
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:19:19.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-68" for this suite.
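
The optional-Secret mechanics this spec relies on can be sketched with a standalone pod; the names and the busybox image here are illustrative, not taken from this run. Because the volume is marked optional, the pod starts even after the Secret is deleted, and the kubelet eventually re-syncs the mounted keys after any create, update, or delete:

    kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-volume-watcher
    spec:
      containers:
      - name: watcher
        image: docker.io/library/busybox
        command: ["sh", "-c", "while true; do cat /etc/secret-volume/data-1 2>/dev/null; echo; sleep 5; done"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: s-test-opt-upd
          optional: true
    EOF
    # Change the Secret and watch the mounted file follow (after a sync delay),
    # which is the "waiting to observe update in volume" step above:
    kubectl delete secret s-test-opt-upd
    kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-2
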
Dec 17 13:19:41.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:19:41.724: INFO: namespace secrets-68 deletion completed in 22.137214243s
• [SLOW TEST:101.966 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:19:41.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 17 13:19:42.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8519'
Dec 17 13:19:42.887: INFO: stderr: ""
Dec 17 13:19:42.887: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 13:19:42.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:19:43.164: INFO: stderr: ""
Dec 17 13:19:43.165: INFO: stdout: "update-demo-nautilus-p9lct update-demo-nautilus-r72ls "
Dec 17 13:19:43.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9lct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:19:43.354: INFO: stderr: ""
Dec 17 13:19:43.354: INFO: stdout: ""
Dec 17 13:19:43.354: INFO: update-demo-nautilus-p9lct is created but not running
Dec 17 13:19:48.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:19:49.626: INFO: stderr: ""
Dec 17 13:19:49.626: INFO: stdout: "update-demo-nautilus-p9lct update-demo-nautilus-r72ls "
Dec 17 13:19:49.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9lct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:19:50.015: INFO: stderr: ""
Dec 17 13:19:50.015: INFO: stdout: ""
Dec 17 13:19:50.015: INFO: update-demo-nautilus-p9lct is created but not running
Dec 17 13:19:55.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:19:55.172: INFO: stderr: ""
Dec 17 13:19:55.172: INFO: stdout: "update-demo-nautilus-p9lct update-demo-nautilus-r72ls "
Dec 17 13:19:55.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9lct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:19:55.289: INFO: stderr: ""
Dec 17 13:19:55.290: INFO: stdout: "true"
Dec 17 13:19:55.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p9lct -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:19:55.396: INFO: stderr: ""
Dec 17 13:19:55.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 13:19:55.397: INFO: validating pod update-demo-nautilus-p9lct
Dec 17 13:19:55.416: INFO: got data: { "image": "nautilus.jpg" }
Dec 17 13:19:55.416: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 13:19:55.416: INFO: update-demo-nautilus-p9lct is verified up and running
Dec 17 13:19:55.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72ls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:19:55.505: INFO: stderr: ""
Dec 17 13:19:55.505: INFO: stdout: "true"
Dec 17 13:19:55.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72ls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:19:55.601: INFO: stderr: ""
Dec 17 13:19:55.601: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 13:19:55.601: INFO: validating pod update-demo-nautilus-r72ls
Dec 17 13:19:55.608: INFO: got data: { "image": "nautilus.jpg" }
Dec 17 13:19:55.608: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 13:19:55.608: INFO: update-demo-nautilus-r72ls is verified up and running
STEP: scaling down the replication controller
Dec 17 13:19:55.610: INFO: scanned /root for discovery docs:
Dec 17 13:19:55.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8519'
Dec 17 13:19:56.754: INFO: stderr: ""
Dec 17 13:19:56.755: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 13:19:56.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:19:56.910: INFO: stderr: ""
Dec 17 13:19:56.910: INFO: stdout: "update-demo-nautilus-p9lct update-demo-nautilus-r72ls "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 17 13:20:01.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:20:02.087: INFO: stderr: ""
Dec 17 13:20:02.087: INFO: stdout: "update-demo-nautilus-p9lct update-demo-nautilus-r72ls "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 17 13:20:07.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:20:07.199: INFO: stderr: ""
Dec 17 13:20:07.199: INFO: stdout: "update-demo-nautilus-r72ls "
Dec 17 13:20:07.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72ls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:07.300: INFO: stderr: ""
Dec 17 13:20:07.300: INFO: stdout: "true"
Dec 17 13:20:07.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72ls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:07.391: INFO: stderr: ""
Dec 17 13:20:07.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 13:20:07.391: INFO: validating pod update-demo-nautilus-r72ls
Dec 17 13:20:07.396: INFO: got data: { "image": "nautilus.jpg" }
Dec 17 13:20:07.396: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 13:20:07.397: INFO: update-demo-nautilus-r72ls is verified up and running
STEP: scaling up the replication controller
Dec 17 13:20:07.399: INFO: scanned /root for discovery docs:
Dec 17 13:20:07.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8519'
Dec 17 13:20:08.625: INFO: stderr: ""
Dec 17 13:20:08.625: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 13:20:08.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:20:08.806: INFO: stderr: ""
Dec 17 13:20:08.807: INFO: stdout: "update-demo-nautilus-ldh7x update-demo-nautilus-r72ls "
Dec 17 13:20:08.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldh7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:08.982: INFO: stderr: ""
Dec 17 13:20:08.982: INFO: stdout: ""
Dec 17 13:20:08.982: INFO: update-demo-nautilus-ldh7x is created but not running
Dec 17 13:20:13.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:20:14.119: INFO: stderr: ""
Dec 17 13:20:14.119: INFO: stdout: "update-demo-nautilus-ldh7x update-demo-nautilus-r72ls "
Dec 17 13:20:14.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldh7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:14.261: INFO: stderr: ""
Dec 17 13:20:14.261: INFO: stdout: ""
Dec 17 13:20:14.261: INFO: update-demo-nautilus-ldh7x is created but not running
Dec 17 13:20:19.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8519'
Dec 17 13:20:19.462: INFO: stderr: ""
Dec 17 13:20:19.462: INFO: stdout: "update-demo-nautilus-ldh7x update-demo-nautilus-r72ls "
Dec 17 13:20:19.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldh7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:19.619: INFO: stderr: ""
Dec 17 13:20:19.620: INFO: stdout: "true"
Dec 17 13:20:19.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ldh7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:19.735: INFO: stderr: ""
Dec 17 13:20:19.735: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 13:20:19.735: INFO: validating pod update-demo-nautilus-ldh7x
Dec 17 13:20:19.752: INFO: got data: { "image": "nautilus.jpg" }
Dec 17 13:20:19.752: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 13:20:19.752: INFO: update-demo-nautilus-ldh7x is verified up and running
Dec 17 13:20:19.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72ls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:19.891: INFO: stderr: ""
Dec 17 13:20:19.891: INFO: stdout: "true"
Dec 17 13:20:19.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r72ls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8519'
Dec 17 13:20:20.013: INFO: stderr: ""
Dec 17 13:20:20.013: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 13:20:20.013: INFO: validating pod update-demo-nautilus-r72ls
Dec 17 13:20:20.062: INFO: got data: { "image": "nautilus.jpg" }
Dec 17 13:20:20.062: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 13:20:20.062: INFO: update-demo-nautilus-r72ls is verified up and running
STEP: using delete to clean up resources
Dec 17 13:20:20.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8519'
Dec 17 13:20:20.164: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 13:20:20.164: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 17 13:20:20.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8519'
Dec 17 13:20:20.344: INFO: stderr: "No resources found.\n"
Dec 17 13:20:20.344: INFO: stdout: ""
Dec 17 13:20:20.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8519 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 13:20:20.529: INFO: stderr: ""
Dec 17 13:20:20.529: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:20:20.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8519" for this suite.
Dec 17 13:20:44.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:20:44.694: INFO: namespace kubectl-8519 deletion completed in 24.146369167s
• [SLOW TEST:62.969 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:20:44.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 17 13:20:54.900: INFO: Pod pod-hostip-6a227698-968b-4e28-89c0-37fdc8a0780e has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 13:20:54.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-972" for this suite.
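
The assertion behind this spec is simply that .status.hostIP is populated once the pod is bound to a node. An equivalent ad-hoc check, using the pod and namespace from this run, would be something like:

    kubectl get pod pod-hostip-6a227698-968b-4e28-89c0-37fdc8a0780e \
      --namespace=pods-972 -o jsonpath='{.status.hostIP}'
    # expected here: 10.96.3.65, the iruya-node address reported above
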
Dec 17 13:21:17.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 13:21:17.812: INFO: namespace pods-972 deletion completed in 22.90297707s
• [SLOW TEST:33.117 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 13:21:17.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 13:21:17.874: INFO: Creating deployment "nginx-deployment"
Dec 17 13:21:17.884: INFO: Waiting for observed generation 1
Dec 17 13:21:21.070: INFO: Waiting for all required pods to come up
Dec 17 13:21:21.426: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 17 13:21:51.690: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 17 13:21:51.699: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 17 13:21:51.709: INFO: Updating deployment nginx-deployment
Dec 17 13:21:51.709: INFO: Waiting for observed generation 2
Dec 17 13:21:54.954: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 17 13:21:54.974: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 17 13:21:56.627: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 17 13:21:56.653: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 17 13:21:56.653: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 17 13:21:56.665: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 17 13:21:56.833: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 17 13:21:56.833: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 17 13:21:56.848: INFO: Updating deployment nginx-deployment
Dec 17 13:21:56.848: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 17 13:21:57.036: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 17 13:21:58.675: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 17 13:22:05.604: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7177,SelfLink:/apis/apps/v1/namespaces/deployment-7177/deployments/nginx-deployment,UID:2095e349-8a71-411d-8163-b3fc8eaf0df5,ResourceVersion:17012414,Generation:3,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-17 13:21:54 +0000 UTC 2019-12-17 13:21:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-17 13:21:57 +0000 UTC 2019-12-17 13:21:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 17 13:22:07.064: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7177,SelfLink:/apis/apps/v1/namespaces/deployment-7177/replicasets/nginx-deployment-55fb7cb77f,UID:9516b747-e974-47f3-836a-a80b7d163560,ResourceVersion:17012457,Generation:3,CreationTimestamp:2019-12-17 13:21:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2095e349-8a71-411d-8163-b3fc8eaf0df5 0xc002d61c57 0xc002d61c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 17 13:22:07.064: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 17 13:22:07.065: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7177,SelfLink:/apis/apps/v1/namespaces/deployment-7177/replicasets/nginx-deployment-7b8c6f4498,UID:44df176f-b46e-4e58-ba06-fd468da6dc37,ResourceVersion:17012456,Generation:3,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2095e349-8a71-411d-8163-b3fc8eaf0df5 0xc002d61d27 0xc002d61d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 17 13:22:09.951: INFO: Pod "nginx-deployment-55fb7cb77f-2fmml" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2fmml,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-2fmml,UID:696933fb-a2a0-48ef-989a-b078e111a40e,ResourceVersion:17012394,Generation:0,CreationTimestamp:2019-12-17 13:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc00256bc87 0xc00256bc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256bcf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-17 13:21:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.951: INFO: Pod "nginx-deployment-55fb7cb77f-7zbwr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7zbwr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-7zbwr,UID:27d273b8-9db0-4e44-9f6a-a6ab1be9725c,ResourceVersion:17012378,Generation:0,CreationTimestamp:2019-12-17 13:21:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc00256bde7 0xc00256bde8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256be60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256be80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 13:21:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.951: INFO: Pod "nginx-deployment-55fb7cb77f-86sfv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-86sfv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-86sfv,UID:d3545923-6be5-4af7-a532-47daa72bd526,ResourceVersion:17012445,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc00256bf57 0xc00256bf58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00256bfd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00256bff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.952: INFO: Pod "nginx-deployment-55fb7cb77f-94fxm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-94fxm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-94fxm,UID:5c5ea72f-f6b8-499c-ae72-2e90675414d8,ResourceVersion:17012481,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60077 0xc001f60078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f600f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f60110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 13:22:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.952: INFO: Pod "nginx-deployment-55fb7cb77f-d7vzb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d7vzb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-d7vzb,UID:638e1b73-2f99-4dd1-bd6c-d8c7ea29007e,ResourceVersion:17012435,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f601e7 0xc001f601e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f60250} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f60270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.952: INFO: Pod "nginx-deployment-55fb7cb77f-f9tw6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f9tw6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-f9tw6,UID:5f994d37-90ac-404a-93f6-d15a0726aac9,ResourceVersion:17012443,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f602f7 0xc001f602f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f60390} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f603b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.952: INFO: Pod "nginx-deployment-55fb7cb77f-gh4rx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gh4rx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-gh4rx,UID:f2f1191f-6650-48cd-a73b-7385f8f563ef,ResourceVersion:17012441,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60437 0xc001f60438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f604c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f604e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.952: INFO: Pod "nginx-deployment-55fb7cb77f-nm6vb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nm6vb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-nm6vb,UID:5ca008f0-27e2-40f4-b8df-a12d409e421a,ResourceVersion:17012471,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60567 0xc001f60568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f605d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f605f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-17 13:22:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.953: INFO: Pod "nginx-deployment-55fb7cb77f-rn9p6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rn9p6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-rn9p6,UID:5e00c2a1-9b4e-4b4a-b7f2-890c5998b28a,ResourceVersion:17012373,Generation:0,CreationTimestamp:2019-12-17 13:21:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f606c7 0xc001f606c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f60730} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f60800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-17 13:21:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.953: INFO: Pod "nginx-deployment-55fb7cb77f-rshmv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rshmv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-rshmv,UID:14be9e7b-e577-4c9c-ad14-69fcb220e8f9,ResourceVersion:17012444,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60957 0xc001f60958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f609e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f60a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.953: INFO: Pod "nginx-deployment-55fb7cb77f-sdmps" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sdmps,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-sdmps,UID:2b97470f-12c4-46d9-96d6-7169b3f539d4,ResourceVersion:17012453,Generation:0,CreationTimestamp:2019-12-17 13:22:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60b17 0xc001f60b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f60ba0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f60bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:22:09.953: INFO: Pod "nginx-deployment-55fb7cb77f-vdzcn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vdzcn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-vdzcn,UID:626a5f25-f9c7-47d4-ade2-3968e9b99ae6,ResourceVersion:17012364,Generation:0,CreationTimestamp:2019-12-17 13:21:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60c77 0xc001f60c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f60d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f60de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:51 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 13:21:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
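The verdicts in this dump follow a simple rule: a pod counts as available once it is Running and its Ready condition is True. Every pod owned by ReplicaSet nginx-deployment-55fb7cb77f runs image nginx:404, a deliberately unresolvable tag, so those pods sit in Phase:Pending with the nginx container stuck in ContainerStateWaiting{Reason:ContainerCreating}; only pods of the older ReplicaSet nginx-deployment-7b8c6f4498 (image docker.io/library/nginx:1.14-alpine) reach Running and are reported as available. A minimal sketch of that check, assuming the standard k8s.io/api/core/v1 types and a minReadySeconds of zero; the helper name isPodAvailable is illustrative, not the framework's actual function:

package podcheck

import (
    corev1 "k8s.io/api/core/v1"
)

// isPodAvailable restates the "is available" / "is not available"
// verdicts printed in this dump: Running phase plus a PodReady
// condition that is True. The real e2e helper also honours the
// Deployment's minReadySeconds, assumed to be zero here.
func isPodAvailable(pod *corev1.Pod) bool {
    if pod.Status.Phase != corev1.PodRunning {
        return false // the Pending nginx:404 pods fail here
    }
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue
        }
    }
    return false
}

Under this check the nginx:404 pods fail on phase alone, which matches every "is not available" line above and below.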
Dec 17 13:22:09.954: INFO: Pod "nginx-deployment-55fb7cb77f-vm4sk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vm4sk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-55fb7cb77f-vm4sk,UID:5ecd7361-1619-4de7-93df-ce986a4ceb45,ResourceVersion:17012397,Generation:0,CreationTimestamp:2019-12-17 13:21:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9516b747-e974-47f3-836a-a80b7d163560 0xc001f60f57 0xc001f60f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61020} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f61040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 13:21:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 17 13:22:09.954: INFO: Pod "nginx-deployment-7b8c6f4498-42rhd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-42rhd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-42rhd,UID:4f6f063c-c3d8-4395-8711-79bcc253849d,ResourceVersion:17012462,Generation:0,CreationTimestamp:2019-12-17
13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f61177 0xc001f61178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61290} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f612b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-17 13:22:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.954: INFO: Pod "nginx-deployment-7b8c6f4498-4ndg8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4ndg8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-4ndg8,UID:8580ecfc-ebe3-4c93-a7b8-ea7b21e35187,ResourceVersion:17012465,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f613f7 
0xc001f613f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f614a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f614c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 13:21:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.954: INFO: Pod "nginx-deployment-7b8c6f4498-72glq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-72glq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-72glq,UID:4c774edb-98d8-4001-85cb-01af46d0c21f,ResourceVersion:17012324,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f61587 0xc001f61588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61780} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f617a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://db25bf4f082439ec3f534cc6196e110775fe84137a21c1d0ab6325feb7c47dd4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-88zbw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-88zbw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-88zbw,UID:4b640082-3da5-4506-b2b2-8e56bb3eca09,ResourceVersion:17012449,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f618f7 0xc001f618f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61990} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f619b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-cfk2x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cfk2x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-cfk2x,UID:603ec980-31ed-4299-ab8a-8998e91e1f09,ResourceVersion:17012309,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f61a97 0xc001f61a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61ba0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f61bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://367823b6872e52014a045451c6cec7920392d89b961a20b824bdf0f52c840ec4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-f5w8k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f5w8k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-f5w8k,UID:af9e4974-f246-4f9d-89f9-93d35c1df528,ResourceVersion:17012439,Generation:0,CreationTimestamp:2019-12-17 13:21:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f61d27 0xc001f61d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61de0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f61e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2019-12-17 13:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 13:21:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-ff6rh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ff6rh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-ff6rh,UID:c65a67bf-553e-4821-b6cb-11e1b09055ac,ResourceVersion:17012320,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc001f61ec7 0xc001f61ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f61f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f61f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:48 +0000 UTC,} nil} {nil nil 
nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8370a5d818cae1ee2653d7bdfc3dbb8d790707e0cf6be6cee248f836fbb6eab5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-fzqgq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fzqgq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-fzqgq,UID:9c100cae-2c93-4cd1-86d7-324651af4e0e,ResourceVersion:17012438,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90037 0xc000d90038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d900a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d900c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-g8llv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g8llv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-g8llv,UID:2388ac66-8c0d-4d1d-9131-4272fcea1697,ResourceVersion:17012431,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90147 0xc000d90148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d901c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d901e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.955: INFO: Pod "nginx-deployment-7b8c6f4498-gb7fv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gb7fv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-gb7fv,UID:bfc2f8f7-197f-450f-82cb-45549a765678,ResourceVersion:17012448,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90267 0xc000d90268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d902f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d90320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.956: INFO: Pod "nginx-deployment-7b8c6f4498-jm7c9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jm7c9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-jm7c9,UID:9deac652-42f5-463e-96a8-d11650b05a6b,ResourceVersion:17012446,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d903a7 0xc000d903a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d90420} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000d90440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.956: INFO: Pod "nginx-deployment-7b8c6f4498-jrxc5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jrxc5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-jrxc5,UID:c7bb613c-b9eb-44e3-8a04-6392ce22bd4c,ResourceVersion:17012451,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d904c7 0xc000d904c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d90550} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d90570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.956: INFO: Pod "nginx-deployment-7b8c6f4498-jzfjj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jzfjj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-jzfjj,UID:02e40132-8e88-4f75-9d9c-e553349e7c98,ResourceVersion:17012317,Generation:0,CreationTimestamp:2019-12-17 13:21:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d905f7 0xc000d905f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d90690} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d906c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fc4a0a1d81dd480d941ebba03fc8249b747fb8f5a49173e5fbd35daf0b90a723}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.956: INFO: Pod "nginx-deployment-7b8c6f4498-nmlhz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nmlhz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-nmlhz,UID:ab606912-50eb-4502-ba0f-648d98366ac2,ResourceVersion:17012305,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90847 
0xc000d90848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d90a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d90a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://953af58c61eaad0e6fd6345e5ae6fc5becf75b375910ca77edbf5e049c635700}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.956: INFO: Pod "nginx-deployment-7b8c6f4498-q57ls" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q57ls,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-q57ls,UID:ecbe89a2-66c9-4a08-8b8c-3e7067fcfc81,ResourceVersion:17012447,Generation:0,CreationTimestamp:2019-12-17 13:21:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90c07 0xc000d90c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d90c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d90cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:22:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.956: INFO: Pod "nginx-deployment-7b8c6f4498-s84bd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s84bd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-s84bd,UID:1391d10f-cd06-443c-9f01-8997cca86496,ResourceVersion:17012295,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90d37 0xc000d90d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d90e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d90e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a01b8b208bff6069f2a233d97979e9955a9b54d63eeb7c755d22bd5f72d98de6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.957: INFO: Pod "nginx-deployment-7b8c6f4498-slbl7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-slbl7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-slbl7,UID:749cc5a9-c7df-4248-9d11-909f03dea1a9,ResourceVersion:17012429,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d90f07 0xc000d90f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d910f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d91170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.957: INFO: Pod "nginx-deployment-7b8c6f4498-vjjdj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vjjdj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-vjjdj,UID:ca1b21a5-e0d0-4526-be3d-5f0cc943f2cb,ResourceVersion:17012325,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d912b7 0xc000d912b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d913f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000d91410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6e6b013da2ce013ae14467eb568b83416ea958920c68a3f2865f2be4c5d81fee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.957: INFO: Pod "nginx-deployment-7b8c6f4498-vzwxb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vzwxb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-vzwxb,UID:d650fedc-fadc-45f6-93a7-0c2fdc935f8c,ResourceVersion:17012430,Generation:0,CreationTimestamp:2019-12-17 13:21:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d91657 0xc000d91658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d916d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d916f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:58 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 17 13:22:09.957: INFO: Pod "nginx-deployment-7b8c6f4498-w5mm7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w5mm7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7177,SelfLink:/api/v1/namespaces/deployment-7177/pods/nginx-deployment-7b8c6f4498-w5mm7,UID:76d7818c-f6ca-4155-95df-39fb6c089ac4,ResourceVersion:17012330,Generation:0,CreationTimestamp:2019-12-17 13:21:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 44df176f-b46e-4e58-ba06-fd468da6dc37 0xc000d918a7 0xc000d918a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6627b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6627b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6627b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d91910} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d919a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:21:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-17 13:21:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-17 13:21:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://15470b1c33de044722254e306b1c7eb8911d77ddf2fa7fcc579705768e7c5ab6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:22:09.957: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "deployment-7177" for this suite. Dec 17 13:23:03.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:23:03.956: INFO: namespace deployment-7177 deletion completed in 51.536694859s • [SLOW TEST:106.144 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:23:03.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Dec 17 13:23:04.153: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Dec 17 13:23:05.042: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Dec 17 13:23:07.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185784, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:23:09.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185784, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:23:11.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185784, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:23:13.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185784, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:23:15.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185785, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712185784, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:23:21.806: INFO: Waited 4.513743213s for the sample-apiserver to be ready to handle requests. 
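(For context, the step above registers the sample API server with the aggregation layer by creating an APIService object, then polls the backing deployment until it has minimum availability. A minimal sketch of such a registration follows; the group, service name, and namespace are illustrative placeholders, not the suite's actual fixture values.)

    kubectl apply -f - <<'EOF'
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1alpha1.wardle.k8s.io        # must be <version>.<group>
    spec:
      group: wardle.k8s.io                # illustrative group
      version: v1alpha1
      service:                            # in-cluster Service fronting the extension apiserver
        name: sample-api
        namespace: default
      insecureSkipTLSVerify: true         # demo only; real setups pin a caBundle
      groupPriorityMinimum: 2000
      versionPriority: 200
    EOF

Once the backing deployment becomes available (the condition the status dumps above are polling for), kube-apiserver proxies requests for the registered group/version to that service.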
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:23:22.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5695" for this suite. Dec 17 13:23:28.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:23:28.837: INFO: namespace aggregator-5695 deletion completed in 6.17059428s • [SLOW TEST:24.880 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:23:28.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:23:52.991: INFO: Container started at 2019-12-17 13:23:35 +0000 UTC, pod became ready at 2019-12-17 13:23:51 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:23:52.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8001" for this suite. 
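(The readiness-probe test above boils down to a pod whose probe is gated by initialDelaySeconds: the container starts right away, but the pod is only marked Ready after the delay, and its restart count stays at zero. A minimal sketch, with made-up names and timings:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-delay-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["/bin/sh", "-c", "echo ok > /tmp/ready && sleep 600"]
        readinessProbe:
          exec:
            command: ["cat", "/tmp/ready"]
          initialDelaySeconds: 20   # pod stays NotReady at least this long
          periodSeconds: 5
    EOF

Watching with `kubectl get pod readiness-delay-demo -w` would show READY flip from 0/1 to 1/1 only after the initial delay, with RESTARTS staying 0.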
Dec 17 13:24:15.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:24:15.102: INFO: namespace container-probe-8001 deletion completed in 22.107264084s • [SLOW TEST:46.264 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:24:15.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Dec 17 13:24:15.192: INFO: Waiting up to 5m0s for pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032" in namespace "containers-5459" to be "success or failure" Dec 17 13:24:15.202: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032": Phase="Pending", Reason="", readiness=false. Elapsed: 9.613525ms Dec 17 13:24:17.212: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019594453s Dec 17 13:24:19.220: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026919433s Dec 17 13:24:21.229: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036592204s Dec 17 13:24:23.239: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046630412s Dec 17 13:24:25.250: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057646419s STEP: Saw pod success Dec 17 13:24:25.251: INFO: Pod "client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032" satisfied condition "success or failure" Dec 17 13:24:25.255: INFO: Trying to get logs from node iruya-node pod client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032 container test-container: STEP: delete the pod Dec 17 13:24:25.347: INFO: Waiting for pod client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032 to disappear Dec 17 13:24:25.354: INFO: Pod client-containers-0e6615cd-2c64-4d0c-9d96-4ea70d2c3032 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:24:25.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5459" for this suite. 
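(The "override all" pod this test creates relies on the PodSpec command/args fields: command replaces the image's ENTRYPOINT and args replaces its CMD. An illustrative equivalent, not the suite's actual fixture:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: command-override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/echo"]              # overrides the image ENTRYPOINT
        args: ["overridden", "arguments"]   # overrides the image CMD
    EOF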
Dec 17 13:24:31.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:24:31.542: INFO: namespace containers-5459 deletion completed in 6.178616471s • [SLOW TEST:16.440 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:24:31.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 17 13:24:40.256: INFO: Successfully updated pod "labelsupdated9670c5d-c1cc-49dc-8cf5-3f7d28033d9b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:24:44.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5917" for this suite. 
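(The projected-downwardAPI test above mounts the pod's own labels as a file, then edits a label and expects the kubelet to rewrite the file — that is the "Successfully updated pod" step. A sketch of that wiring, names illustrative:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        mylabel: initial
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["/bin/sh", "-c", "sleep 600"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
    EOF
    kubectl label pod labels-demo mylabel=updated --overwrite
    # /etc/podinfo/labels inside the container is refreshed shortly afterwards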
Dec 17 13:25:22.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:25:22.526: INFO: namespace projected-5917 deletion completed in 38.17204314s • [SLOW TEST:50.985 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:25:22.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-7caeea2a-01d8-4785-9a9d-a619b5f9379f in namespace container-probe-6891 Dec 17 13:25:30.696: INFO: Started pod busybox-7caeea2a-01d8-4785-9a9d-a619b5f9379f in namespace container-probe-6891 STEP: checking the pod's current state and verifying that restartCount is present Dec 17 13:25:30.701: INFO: Initial restart count of pod busybox-7caeea2a-01d8-4785-9a9d-a619b5f9379f is 0 Dec 17 13:26:29.059: INFO: Restart count of pod container-probe-6891/busybox-7caeea2a-01d8-4785-9a9d-a619b5f9379f is now 1 (58.357250214s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:26:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6891" for this suite. 
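(The liveness test above is the classic exec-probe pattern: the container deletes its own health file partway through, the `cat /tmp/health` probe starts failing, and the kubelet restarts the container — which is why the log sees the restart count go from 0 to 1. A sketch with illustrative timings:)

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF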
Dec 17 13:26:35.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:26:35.355: INFO: namespace container-probe-6891 deletion completed in 6.240542099s • [SLOW TEST:72.828 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:26:35.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Dec 17 13:26:35.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 17 13:26:37.571: INFO: stderr: "" Dec 17 13:26:37.571: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:26:37.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6823" for this suite. 
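(The cluster-info check above just parses the output of the command below; the escape sequences captured in stdout are the green/yellow highlighting kubectl emits.)

    kubectl cluster-info
    # For deeper diagnostics, dump cluster state to files instead of stdout:
    kubectl cluster-info dump --output-directory=/tmp/cluster-state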
Dec 17 13:26:43.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:26:43.759: INFO: namespace kubectl-6823 deletion completed in 6.176747492s • [SLOW TEST:8.404 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:26:43.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:26:43.899: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 17 13:26:43.979: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 17 13:26:48.990: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 17 13:26:53.006: INFO: Creating deployment "test-rolling-update-deployment" Dec 17 13:26:53.012: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 17 13:26:53.076: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Dec 17 13:26:55.092: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 17 13:26:55.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:26:57.102: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:26:59.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712186013, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 17 13:27:01.105: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 17 13:27:01.116: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7534,SelfLink:/apis/apps/v1/namespaces/deployment-7534/deployments/test-rolling-update-deployment,UID:bc6c5d41-b933-4ae4-932c-2d327ccde387,ResourceVersion:17013263,Generation:1,CreationTimestamp:2019-12-17 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-17 13:26:53 +0000 UTC 2019-12-17 13:26:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-17 13:27:00 +0000 UTC 2019-12-17 13:26:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 17 13:27:01.119: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7534,SelfLink:/apis/apps/v1/namespaces/deployment-7534/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:24108454-3dca-4db7-9e18-46ef0e825df6,ResourceVersion:17013252,Generation:1,CreationTimestamp:2019-12-17 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc6c5d41-b933-4ae4-932c-2d327ccde387 0xc001a33bd7 0xc001a33bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 17 13:27:01.119: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 17 13:27:01.120: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7534,SelfLink:/apis/apps/v1/namespaces/deployment-7534/replicasets/test-rolling-update-controller,UID:49940395-8232-41d0-b724-54e43895b3d2,ResourceVersion:17013261,Generation:2,CreationTimestamp:2019-12-17 13:26:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc6c5d41-b933-4ae4-932c-2d327ccde387 0xc001a33b07 0xc001a33b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 17 13:27:01.123: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-ttsgc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-ttsgc,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7534,SelfLink:/api/v1/namespaces/deployment-7534/pods/test-rolling-update-deployment-79f6b9d75c-ttsgc,UID:88f17bc6-391d-4d2a-989f-e7683d760431,ResourceVersion:17013251,Generation:0,CreationTimestamp:2019-12-17 13:26:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 24108454-3dca-4db7-9e18-46ef0e825df6 0xc001b60657 0xc001b60658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t84xc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t84xc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t84xc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b606d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b606f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:26:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:27:00 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:27:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:26:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-17 13:26:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-17 13:26:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d01a9883dc15d1c668fdd1e3645102289b0b09b38468fd51c7ad122e7af4d167}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:27:01.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7534" for this suite. Dec 17 13:27:07.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:27:07.313: INFO: namespace deployment-7534 deletion completed in 6.184472977s • [SLOW TEST:23.553 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:27:07.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Dec 17 13:27:07.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Dec 17 13:27:07.685: INFO: stderr: "" Dec 17 13:27:07.686: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 
17 13:27:07.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3292" for this suite. Dec 17 13:27:13.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:27:13.945: INFO: namespace kubectl-3292 deletion completed in 6.22508005s • [SLOW TEST:6.632 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:27:13.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Dec 17 13:27:21.293: INFO: 0 pods remaining Dec 17 13:27:21.293: INFO: 0 pods has nil DeletionTimestamp Dec 17 13:27:21.293: INFO: STEP: Gathering metrics W1217 13:27:22.546463 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 17 13:27:22.546: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:27:22.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7310" for this suite. 
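The garbage-collector test above exercises foreground cascading deletion: with deleteOptions.propagationPolicy=Foreground, the ReplicationController object is kept (carrying a deletionTimestamp) until the garbage collector has removed the pods it owns, which is why the log counts pods down to zero before the rc disappears. A minimal client-go sketch of the same call; the namespace matches the log, but the rc name and kubeconfig path are illustrative:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Same kubeconfig the suite logs at startup.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Foreground propagation: the rc keeps existing (with a
    // deletionTimestamp) until the garbage collector has deleted
    // every pod it owns; only then is the rc itself removed.
    fg := metav1.DeletePropagationForeground
    err = cs.CoreV1().ReplicationControllers("gc-7310").Delete(
        context.TODO(), "simpletest.rc", // rc name is illustrative
        metav1.DeleteOptions{PropagationPolicy: &fg})
    if err != nil {
        panic(err)
    }
}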
Dec 17 13:27:32.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:27:32.875: INFO: namespace gc-7310 deletion completed in 10.282626947s • [SLOW TEST:18.928 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:27:32.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 17 13:27:33.143: INFO: Number of nodes with available pods: 0 Dec 17 13:27:33.143: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:34.964: INFO: Number of nodes with available pods: 0 Dec 17 13:27:34.964: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:35.408: INFO: Number of nodes with available pods: 0 Dec 17 13:27:35.408: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:36.172: INFO: Number of nodes with available pods: 0 Dec 17 13:27:36.172: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:37.294: INFO: Number of nodes with available pods: 0 Dec 17 13:27:37.294: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:38.151: INFO: Number of nodes with available pods: 0 Dec 17 13:27:38.151: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:39.579: INFO: Number of nodes with available pods: 0 Dec 17 13:27:39.579: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:40.550: INFO: Number of nodes with available pods: 0 Dec 17 13:27:40.550: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:41.158: INFO: Number of nodes with available pods: 0 Dec 17 13:27:41.158: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:42.156: INFO: Number of nodes with available pods: 0 Dec 17 13:27:42.156: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:43.183: INFO: Number of nodes with available pods: 1 Dec 17 13:27:43.183: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:44.166: INFO: Number of nodes with available pods: 1 Dec 17 13:27:44.166: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:45.163: INFO: Number of nodes with available pods: 2 Dec 17 13:27:45.163: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
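Before the log continues with the revive check, a sketch of the object this test drives: a DaemonSet schedules one pod per eligible node, and the repeated "Number of nodes with available pods" lines are the test polling until that count reaches the node count (2 here). This assumes a reachable cluster; the label and image are illustrative, not the suite's exact fixture:

package main

import (
    "context"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: v1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: v1.PodSpec{Containers: []v1.Container{{
                    Name:  "app",
                    Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
                }}},
            },
        },
    }
    created, err := cs.AppsV1().DaemonSets("daemonsets-6240").Create(
        context.TODO(), ds, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    // Zero right after creation; the controller fills these in, and a
    // test like the one above polls until available == desired.
    fmt.Println(created.Status.NumberAvailable, created.Status.DesiredNumberScheduled)
}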
Dec 17 13:27:45.207: INFO: Number of nodes with available pods: 1 Dec 17 13:27:45.208: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:46.226: INFO: Number of nodes with available pods: 1 Dec 17 13:27:46.226: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:47.223: INFO: Number of nodes with available pods: 1 Dec 17 13:27:47.223: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:48.235: INFO: Number of nodes with available pods: 1 Dec 17 13:27:48.235: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:49.224: INFO: Number of nodes with available pods: 1 Dec 17 13:27:49.224: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:50.220: INFO: Number of nodes with available pods: 1 Dec 17 13:27:50.220: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:51.230: INFO: Number of nodes with available pods: 1 Dec 17 13:27:51.230: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:52.217: INFO: Number of nodes with available pods: 1 Dec 17 13:27:52.217: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:53.246: INFO: Number of nodes with available pods: 1 Dec 17 13:27:53.246: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:54.222: INFO: Number of nodes with available pods: 1 Dec 17 13:27:54.222: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:55.226: INFO: Number of nodes with available pods: 1 Dec 17 13:27:55.226: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:56.237: INFO: Number of nodes with available pods: 1 Dec 17 13:27:56.237: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:57.225: INFO: Number of nodes with available pods: 1 Dec 17 13:27:57.225: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:58.226: INFO: Number of nodes with available pods: 1 Dec 17 13:27:58.226: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:27:59.226: INFO: Number of nodes with available pods: 1 Dec 17 13:27:59.227: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:28:00.233: INFO: Number of nodes with available pods: 1 Dec 17 13:28:00.233: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:28:01.226: INFO: Number of nodes with available pods: 1 Dec 17 13:28:01.226: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:28:02.238: INFO: Number of nodes with available pods: 1 Dec 17 13:28:02.238: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:28:03.256: INFO: Number of nodes with available pods: 1 Dec 17 13:28:03.256: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:28:04.219: INFO: Number of nodes with available pods: 2 Dec 17 13:28:04.219: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6240, will wait for the garbage collector to delete the pods Dec 17 13:28:04.291: INFO: Deleting DaemonSet.extensions daemon-set took: 14.215691ms Dec 17 13:28:04.592: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.267075ms Dec 17 13:28:16.605: INFO: Number of nodes with available pods: 0 Dec 17 13:28:16.605: INFO: Number of running nodes: 0, number of available pods: 0 Dec 17 
13:28:16.615: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6240/daemonsets","resourceVersion":"17013558"},"items":null} Dec 17 13:28:16.621: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6240/pods","resourceVersion":"17013558"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:28:16.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6240" for this suite. Dec 17 13:28:22.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:28:22.783: INFO: namespace daemonsets-6240 deletion completed in 6.133455427s • [SLOW TEST:49.908 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:28:22.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 17 13:28:22.933: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea" in namespace "downward-api-6875" to be "success or failure" Dec 17 13:28:22.944: INFO: Pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.595675ms Dec 17 13:28:24.954: INFO: Pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020201054s Dec 17 13:28:26.969: INFO: Pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035055599s Dec 17 13:28:28.977: INFO: Pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04361388s Dec 17 13:28:30.991: INFO: Pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057859089s STEP: Saw pod success Dec 17 13:28:30.992: INFO: Pod "downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea" satisfied condition "success or failure" Dec 17 13:28:30.998: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea container client-container: STEP: delete the pod Dec 17 13:28:31.078: INFO: Waiting for pod downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea to disappear Dec 17 13:28:31.082: INFO: Pod downwardapi-volume-ca24f3ff-d739-4fa5-a541-3270958658ea no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:28:31.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6875" for this suite. Dec 17 13:28:37.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:28:37.266: INFO: namespace downward-api-6875 deletion completed in 6.178943914s • [SLOW TEST:14.482 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:28:37.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 17 13:28:37.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-771' Dec 17 13:28:37.602: INFO: stderr: "" Dec 17 13:28:37.602: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Dec 17 13:28:47.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-771 -o json' Dec 17 13:28:47.836: INFO: stderr: "" Dec 17 13:28:47.836: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-12-17T13:28:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-771\",\n \"resourceVersion\": \"17013654\",\n \"selfLink\": 
\"/api/v1/namespaces/kubectl-771/pods/e2e-test-nginx-pod\",\n \"uid\": \"52c09a69-2e4a-4568-819b-daa935ee28fb\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-48v4v\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-48v4v\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-48v4v\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-17T13:28:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-17T13:28:45Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-17T13:28:45Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-12-17T13:28:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://0843187d29ab7717c1a62af5288dfd98a090d4145047f5d179f764313e7e8bbc\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-12-17T13:28:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-12-17T13:28:37Z\"\n }\n}\n" STEP: replace the image in the pod Dec 17 13:28:47.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-771' Dec 17 13:28:48.371: INFO: stderr: "" Dec 17 13:28:48.372: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Dec 17 13:28:48.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-771' Dec 17 13:28:55.675: INFO: stderr: "" Dec 17 13:28:55.675: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:28:55.675: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-771" for this suite. Dec 17 13:29:01.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:29:01.921: INFO: namespace kubectl-771 deletion completed in 6.234703741s • [SLOW TEST:24.655 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:29:01.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Dec 17 13:29:02.072: INFO: Number of nodes with available pods: 0 Dec 17 13:29:02.072: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:03.592: INFO: Number of nodes with available pods: 0 Dec 17 13:29:03.592: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:04.217: INFO: Number of nodes with available pods: 0 Dec 17 13:29:04.218: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:05.376: INFO: Number of nodes with available pods: 0 Dec 17 13:29:05.376: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:06.090: INFO: Number of nodes with available pods: 0 Dec 17 13:29:06.090: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:08.109: INFO: Number of nodes with available pods: 0 Dec 17 13:29:08.109: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:09.223: INFO: Number of nodes with available pods: 0 Dec 17 13:29:09.223: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:10.360: INFO: Number of nodes with available pods: 0 Dec 17 13:29:10.361: INFO: Node iruya-node is running more than one daemon pod Dec 17 13:29:11.086: INFO: Number of nodes with available pods: 1 Dec 17 13:29:11.086: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:12.114: INFO: Number of nodes with available pods: 2 Dec 17 13:29:12.114: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
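The step announced above does not kill the container; the test writes Failed directly into the pod's status and then waits for the DaemonSet controller to notice and create a replacement. A hedged sketch of that status write, assuming the daemon pods carry a daemonset-name label as in the sketch earlier:

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    pods := cs.CoreV1().Pods("daemonsets-4181")

    list, err := pods.List(context.TODO(),
        metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
    if err != nil {
        panic(err)
    }
    if len(list.Items) == 0 {
        panic("no daemon pods found")
    }

    // Phase lives in the status subresource, so the change has to go
    // through UpdateStatus; a plain Update would drop it.
    pod := list.Items[0]
    pod.Status.Phase = v1.PodFailed
    if _, err := pods.UpdateStatus(context.TODO(), &pod, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    // The DaemonSet controller treats the failed pod as gone and
    // creates a replacement, which the log sees as the available
    // node count returning to 2.
}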
Dec 17 13:29:12.163: INFO: Number of nodes with available pods: 1 Dec 17 13:29:12.163: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:13.177: INFO: Number of nodes with available pods: 1 Dec 17 13:29:13.177: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:14.178: INFO: Number of nodes with available pods: 1 Dec 17 13:29:14.178: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:15.185: INFO: Number of nodes with available pods: 1 Dec 17 13:29:15.185: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:16.183: INFO: Number of nodes with available pods: 1 Dec 17 13:29:16.183: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:17.177: INFO: Number of nodes with available pods: 1 Dec 17 13:29:17.177: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:18.456: INFO: Number of nodes with available pods: 1 Dec 17 13:29:18.456: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:19.181: INFO: Number of nodes with available pods: 1 Dec 17 13:29:19.181: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:20.176: INFO: Number of nodes with available pods: 1 Dec 17 13:29:20.176: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Dec 17 13:29:21.203: INFO: Number of nodes with available pods: 2 Dec 17 13:29:21.204: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4181, will wait for the garbage collector to delete the pods Dec 17 13:29:21.287: INFO: Deleting DaemonSet.extensions daemon-set took: 16.53114ms Dec 17 13:29:21.587: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.68364ms Dec 17 13:29:37.898: INFO: Number of nodes with available pods: 0 Dec 17 13:29:37.899: INFO: Number of running nodes: 0, number of available pods: 0 Dec 17 13:29:37.903: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4181/daemonsets","resourceVersion":"17013799"},"items":null} Dec 17 13:29:37.905: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4181/pods","resourceVersion":"17013799"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:29:37.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4181" for this suite. 
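The teardown above notes it "will wait for the garbage collector to delete the pods": deleting the DaemonSet object only marks its pods for deletion, and they disappear asynchronously (about 16 seconds in this run). A small polling loop in the same spirit, reusing the assumed label selector:

package main

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Poll until the garbage collector has removed every pod the
    // deleted DaemonSet owned.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        list, err := cs.CoreV1().Pods("daemonsets-4181").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
        if err != nil {
            return false, err
        }
        return len(list.Items) == 0, nil
    })
    if err != nil {
        panic(err)
    }
}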
Dec 17 13:29:43.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:29:44.070: INFO: namespace daemonsets-4181 deletion completed in 6.130582189s • [SLOW TEST:42.148 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:29:44.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-9e06009a-ce76-41c1-a550-a85017f474ce STEP: Creating a pod to test consume configMaps Dec 17 13:29:44.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d" in namespace "projected-8143" to be "success or failure" Dec 17 13:29:44.266: INFO: Pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.422774ms Dec 17 13:29:46.276: INFO: Pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025505893s Dec 17 13:29:48.305: INFO: Pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054396039s Dec 17 13:29:50.320: INFO: Pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069122886s Dec 17 13:29:52.329: INFO: Pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078256109s STEP: Saw pod success Dec 17 13:29:52.330: INFO: Pod "pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d" satisfied condition "success or failure" Dec 17 13:29:52.375: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d container projected-configmap-volume-test: STEP: delete the pod Dec 17 13:29:52.550: INFO: Waiting for pod pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d to disappear Dec 17 13:29:52.558: INFO: Pod pod-projected-configmaps-8176a4f5-142d-416f-b4e0-e465ac9c311d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:29:52.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8143" for this suite. 
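The projected-configMap test that just finished mounts a ConfigMap through a projected volume "with mappings", i.e. an items list that renames a key (data-1) to an arbitrary file path (path/to/data-2) inside the mount. A sketch of that wiring; the names mirror the log's conventions, but the exact data, paths, and image are assumptions:

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    cm := &v1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-test"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().ConfigMaps("projected-8143").Create(
        context.TODO(), cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: v1.VolumeSource{Projected: &v1.ProjectedVolumeSource{
                    Sources: []v1.VolumeProjection{{ConfigMap: &v1.ConfigMapProjection{
                        LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test"},
                        // The "mapping": key data-1 appears in the volume
                        // under path/to/data-2 instead of its key name.
                        Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    }}},
                }},
            }},
            Containers: []v1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29", // illustrative image
                Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                VolumeMounts: []v1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("projected-8143").Create(
        context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}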
Dec 17 13:29:58.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:29:58.727: INFO: namespace projected-8143 deletion completed in 6.16242304s • [SLOW TEST:14.656 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:29:58.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 17 13:29:58.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586" in namespace "projected-3184" to be "success or failure" Dec 17 13:29:58.990: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586": Phase="Pending", Reason="", readiness=false. Elapsed: 91.111696ms Dec 17 13:30:01.013: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11333925s Dec 17 13:30:03.024: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124790292s Dec 17 13:30:05.044: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144336163s Dec 17 13:30:07.060: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586": Phase="Running", Reason="", readiness=true. Elapsed: 8.160787614s Dec 17 13:30:09.070: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.170664259s STEP: Saw pod success Dec 17 13:30:09.070: INFO: Pod "downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586" satisfied condition "success or failure" Dec 17 13:30:09.075: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586 container client-container: STEP: delete the pod Dec 17 13:30:09.171: INFO: Waiting for pod downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586 to disappear Dec 17 13:30:09.234: INFO: Pod downwardapi-volume-28be6f4b-8ba3-448e-8a23-b8e4d5ce8586 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:30:09.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3184" for this suite. Dec 17 13:30:15.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:30:15.499: INFO: namespace projected-3184 deletion completed in 6.252146172s • [SLOW TEST:16.772 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:30:15.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-3ac5afac-7ee6-4d79-833b-32dcc795fcbf STEP: Creating the pod STEP: Updating configmap configmap-test-upd-3ac5afac-7ee6-4d79-833b-32dcc795fcbf STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:31:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1091" for this suite. 
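The ConfigMap test above relies on kubelet volume re-sync: updating the ConfigMap object eventually rewrites the files in every pod that mounts it, which is why the test simply updates the object and then "waits to observe update in volume" (roughly 90 seconds in this run). The update itself is short; the ConfigMap name here is shortened from the log's generated one, and the key/value are assumptions:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    cms := cs.CoreV1().ConfigMaps("configmap-1091")

    cm, err := cms.Get(context.TODO(), "configmap-test-upd", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // Change the data in place; the kubelet re-syncs mounted
    // configMap volumes periodically, so file contents in running
    // pods follow after a delay, which is why the test polls.
    cm.Data["data-1"] = "value-2"
    if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}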
Dec 17 13:32:11.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:32:11.699: INFO: namespace configmap-1091 deletion completed in 22.164002376s • [SLOW TEST:116.199 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:32:11.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:32:18.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8020" for this suite. Dec 17 13:32:24.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:32:24.439: INFO: namespace namespaces-8020 deletion completed in 6.172344122s STEP: Destroying namespace "nsdeletetest-5623" for this suite. Dec 17 13:32:24.442: INFO: Namespace nsdeletetest-5623 was already deleted STEP: Destroying namespace "nsdeletetest-7237" for this suite. 
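The namespaces test exercises cascading namespace cleanup: deleting a namespace deletes every namespaced object inside it, services included, and the namespace stays in Terminating until that cleanup finishes. A sketch of the delete-then-verify pattern, with an illustrative namespace name standing in for the generated nsdeletetest-* ones:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Deleting the namespace deletes every namespaced object in it.
    if err := cs.CoreV1().Namespaces().Delete(
        context.TODO(), "nsdeletetest", metav1.DeleteOptions{}); err != nil {
        panic(err)
    }

    // The test then waits for the namespace to disappear, recreates
    // it, and expects the service list in the fresh namespace to be
    // empty; that verification is just a List:
    svcs, err := cs.CoreV1().Services("nsdeletetest").List(
        context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("services remaining: %d\n", len(svcs.Items))
}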
Dec 17 13:32:30.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:32:30.660: INFO: namespace nsdeletetest-7237 deletion completed in 6.217795746s • [SLOW TEST:18.961 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:32:30.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 17 13:32:38.914: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 17 13:32:49.089: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:32:49.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-463" for this suite. 
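The Delete Grace Period test above issues a graceful delete: the pod gets a deletionTimestamp plus a grace period, the kubelet stops the containers within that window, and only then does the object vanish from the API, which is the "termination notice" the log waits on. A sketch of such a delete; the pod name is illustrative:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Graceful delete: the API sets deletionTimestamp and gives the
    // kubelet 30 seconds to stop the containers before the pod
    // object is actually removed. Polling Get until a NotFound error
    // comes back reproduces the test's "waiting to disappear" step.
    grace := int64(30)
    if err := cs.CoreV1().Pods("pods-463").Delete(
        context.TODO(), "pod-submit-remove", // illustrative pod name
        metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
        panic(err)
    }
}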
Dec 17 13:32:55.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:32:55.307: INFO: namespace pods-463 deletion completed in 6.202582491s • [SLOW TEST:24.646 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:32:55.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 17 13:33:04.635: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:33:04.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6840" for this suite. 
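The terminated-container test that just passed sets a non-default terminationMessagePath and runs as a non-root user; when the container exits, the kubelet reads that file and surfaces its contents (here "DONE") in the container status. A sketch of an equivalent pod spec; the image, UID, and path are assumptions:

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    uid := int64(1000) // non-root, illustrative UID
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-container"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "termination-message-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
                // Non-default path: the kubelet reads this file on
                // container exit and copies it into the status.
                TerminationMessagePath: "/dev/termination-custom-log",
                SecurityContext:        &v1.SecurityContext{RunAsUser: &uid},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("container-runtime-6840").Create(
        context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // Then poll the pod until
    // Status.ContainerStatuses[0].State.Terminated.Message == "DONE".
}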
Dec 17 13:33:10.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:33:10.849: INFO: namespace container-runtime-6840 deletion completed in 6.153054534s • [SLOW TEST:15.541 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:33:10.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-g9928 in namespace proxy-3503 I1217 13:33:11.065775 8 runners.go:180] Created replication controller with name: proxy-service-g9928, namespace: proxy-3503, replica count: 1 I1217 13:33:12.117359 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:13.117983 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:14.118592 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:15.119077 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:16.119435 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:17.119825 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:18.120302 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:19.121099 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 13:33:20.121681 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1217 13:33:21.122201 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1217 13:33:22.122542 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1217 13:33:23.123108 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1217 13:33:24.124031 8 runners.go:180] proxy-service-g9928 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 17 13:33:24.135: INFO: setup took 13.181420012s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 45.020287ms) Dec 17 13:33:24.180: INFO: (0) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 44.958073ms) Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 45.083314ms) Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 45.489978ms) Dec 17 13:33:24.180: INFO: (0) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 44.995216ms) Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 45.260062ms) Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 45.365852ms) Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 45.359238ms) Dec 17 13:33:24.181: INFO: (0) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 46.157678ms) Dec 17 13:33:24.184: INFO: (0) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 48.775524ms) Dec 17 13:33:24.185: INFO: (0) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 49.867812ms) Dec 17 13:33:24.197: INFO: (0) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 62.200012ms) Dec 17 13:33:24.198: INFO: (0) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 62.025192ms) Dec 17 13:33:24.198: INFO: (0) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 61.894945ms) Dec 17 13:33:24.198: INFO: (0) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... (200; 29.296858ms) Dec 17 13:33:24.227: INFO: (1) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... 
(200; 28.973479ms) Dec 17 13:33:24.227: INFO: (1) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 28.883603ms) Dec 17 13:33:24.227: INFO: (1) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 29.230208ms) Dec 17 13:33:24.228: INFO: (1) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test (200; 33.791404ms) Dec 17 13:33:24.232: INFO: (1) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 34.468498ms) Dec 17 13:33:24.234: INFO: (1) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 36.056067ms) Dec 17 13:33:24.234: INFO: (1) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 36.016789ms) Dec 17 13:33:24.234: INFO: (1) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 36.210219ms) Dec 17 13:33:24.235: INFO: (1) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 36.988414ms) Dec 17 13:33:24.255: INFO: (2) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 19.768538ms) Dec 17 13:33:24.256: INFO: (2) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 21.187096ms) Dec 17 13:33:24.258: INFO: (2) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 22.617365ms) Dec 17 13:33:24.258: INFO: (2) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 22.985319ms) Dec 17 13:33:24.258: INFO: (2) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 22.8404ms) Dec 17 13:33:24.258: INFO: (2) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 22.859557ms) Dec 17 13:33:24.258: INFO: (2) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 23.312479ms) Dec 17 13:33:24.258: INFO: (2) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... (200; 30.245255ms) Dec 17 13:33:24.266: INFO: (2) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 30.336927ms) Dec 17 13:33:24.266: INFO: (2) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 30.601286ms) Dec 17 13:33:24.266: INFO: (2) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 30.906412ms) Dec 17 13:33:24.267: INFO: (2) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 31.693333ms) Dec 17 13:33:24.321: INFO: (3) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 52.438687ms) Dec 17 13:33:24.329: INFO: (3) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 61.134007ms) Dec 17 13:33:24.329: INFO: (3) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 61.115862ms) Dec 17 13:33:24.329: INFO: (3) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 60.676803ms) Dec 17 13:33:24.329: INFO: (3) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... 
(200; 60.945758ms) Dec 17 13:33:24.329: INFO: (3) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 61.958638ms) Dec 17 13:33:24.330: INFO: (3) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 61.855399ms) Dec 17 13:33:24.331: INFO: (3) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 63.075276ms) Dec 17 13:33:24.331: INFO: (3) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 63.006444ms) Dec 17 13:33:24.331: INFO: (3) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 63.355099ms) Dec 17 13:33:24.332: INFO: (3) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 64.279692ms) Dec 17 13:33:24.332: INFO: (3) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 64.681456ms) Dec 17 13:33:24.343: INFO: (4) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 10.163694ms) Dec 17 13:33:24.343: INFO: (4) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 10.166173ms) Dec 17 13:33:24.343: INFO: (4) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 10.336098ms) Dec 17 13:33:24.344: INFO: (4) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 11.765882ms) Dec 17 13:33:24.344: INFO: (4) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... (200; 11.79042ms) Dec 17 13:33:24.344: INFO: (4) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 11.972436ms) Dec 17 13:33:24.344: INFO: (4) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 12.151097ms) Dec 17 13:33:24.345: INFO: (4) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 12.115545ms) Dec 17 13:33:24.350: INFO: (4) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 17.61528ms) Dec 17 13:33:24.351: INFO: (4) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 18.950947ms) Dec 17 13:33:24.352: INFO: (4) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 19.775339ms) Dec 17 13:33:24.352: INFO: (4) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 19.850033ms) Dec 17 13:33:24.353: INFO: (4) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 20.632268ms) Dec 17 13:33:24.354: INFO: (4) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 21.971303ms) Dec 17 13:33:24.356: INFO: (4) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 23.623396ms) Dec 17 13:33:24.364: INFO: (5) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 7.813927ms) Dec 17 13:33:24.364: INFO: (5) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... 
(200; 14.12292ms) Dec 17 13:33:24.373: INFO: (5) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 16.608802ms) Dec 17 13:33:24.378: INFO: (5) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 21.834222ms) Dec 17 13:33:24.379: INFO: (5) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 23.116069ms) Dec 17 13:33:24.379: INFO: (5) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 22.852286ms) Dec 17 13:33:24.379: INFO: (5) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 22.862133ms) Dec 17 13:33:24.379: INFO: (5) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 22.883312ms) Dec 17 13:33:24.380: INFO: (5) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 23.213209ms) Dec 17 13:33:24.380: INFO: (5) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 23.286612ms) Dec 17 13:33:24.380: INFO: (5) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 23.459685ms) Dec 17 13:33:24.393: INFO: (6) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 13.175286ms) Dec 17 13:33:24.394: INFO: (6) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 13.672713ms) Dec 17 13:33:24.394: INFO: (6) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 13.495491ms) Dec 17 13:33:24.395: INFO: (6) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 14.942395ms) Dec 17 13:33:24.395: INFO: (6) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 14.852106ms) Dec 17 13:33:24.396: INFO: (6) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 16.046597ms) Dec 17 13:33:24.396: INFO: (6) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 16.25303ms) Dec 17 13:33:24.396: INFO: (6) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 16.472141ms) Dec 17 13:33:24.397: INFO: (6) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 16.489561ms) Dec 17 13:33:24.397: INFO: (6) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 16.709124ms) Dec 17 13:33:24.398: INFO: (6) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 17.710804ms) Dec 17 13:33:24.398: INFO: (6) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 17.650154ms) Dec 17 13:33:24.398: INFO: (6) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 17.748095ms) Dec 17 13:33:24.398: INFO: (6) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test<... (200; 8.53737ms) Dec 17 13:33:24.408: INFO: (7) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 8.286851ms) Dec 17 13:33:24.409: INFO: (7) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... 
(200; 8.638546ms) Dec 17 13:33:24.409: INFO: (7) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 8.821936ms) Dec 17 13:33:24.409: INFO: (7) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 9.074692ms) Dec 17 13:33:24.428: INFO: (7) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 27.78558ms) Dec 17 13:33:24.428: INFO: (7) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test (200; 6.505191ms) Dec 17 13:33:24.462: INFO: (8) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 6.617134ms) Dec 17 13:33:24.464: INFO: (8) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... (200; 20.83807ms) Dec 17 13:33:24.477: INFO: (8) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 21.364714ms) Dec 17 13:33:24.479: INFO: (8) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 23.26017ms) Dec 17 13:33:24.479: INFO: (8) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 23.314406ms) Dec 17 13:33:24.479: INFO: (8) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 23.482141ms) Dec 17 13:33:24.479: INFO: (8) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 23.386365ms) Dec 17 13:33:24.479: INFO: (8) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 23.643793ms) Dec 17 13:33:24.479: INFO: (8) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 23.671296ms) Dec 17 13:33:24.497: INFO: (9) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 17.501832ms) Dec 17 13:33:24.497: INFO: (9) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 17.448876ms) Dec 17 13:33:24.497: INFO: (9) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 17.544137ms) Dec 17 13:33:24.498: INFO: (9) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 17.877701ms) Dec 17 13:33:24.499: INFO: (9) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 18.823705ms) Dec 17 13:33:24.499: INFO: (9) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 18.892668ms) Dec 17 13:33:24.500: INFO: (9) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 19.467374ms) Dec 17 13:33:24.500: INFO: (9) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... 
(200; 19.610029ms) Dec 17 13:33:24.500: INFO: (9) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 20.32642ms) Dec 17 13:33:24.500: INFO: (9) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 20.182212ms) Dec 17 13:33:24.501: INFO: (9) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 20.479073ms) Dec 17 13:33:24.501: INFO: (9) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 20.732688ms) Dec 17 13:33:24.501: INFO: (9) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 21.066193ms) Dec 17 13:33:24.501: INFO: (9) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 21.298292ms) Dec 17 13:33:24.503: INFO: (9) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 22.81292ms) Dec 17 13:33:24.503: INFO: (9) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... (200; 13.258974ms) Dec 17 13:33:24.517: INFO: (10) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 13.549572ms) Dec 17 13:33:24.518: INFO: (10) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 14.282901ms) Dec 17 13:33:24.518: INFO: (10) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 14.214034ms) Dec 17 13:33:24.518: INFO: (10) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 14.545491ms) Dec 17 13:33:24.519: INFO: (10) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 14.89299ms) Dec 17 13:33:24.519: INFO: (10) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 14.893458ms) Dec 17 13:33:24.519: INFO: (10) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 15.694817ms) Dec 17 13:33:24.520: INFO: (10) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 15.966629ms) Dec 17 13:33:24.520: INFO: (10) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 15.700802ms) Dec 17 13:33:24.520: INFO: (10) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 15.997788ms) Dec 17 13:33:24.520: INFO: (10) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 16.651154ms) Dec 17 13:33:24.521: INFO: (10) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 17.814772ms) Dec 17 13:33:24.522: INFO: (10) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 17.76361ms) Dec 17 13:33:24.541: INFO: (11) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 18.814756ms) Dec 17 13:33:24.541: INFO: (11) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 19.080821ms) Dec 17 13:33:24.543: INFO: (11) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 20.123906ms) Dec 17 13:33:24.552: INFO: (11) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... 
(200; 29.151915ms) Dec 17 13:33:24.552: INFO: (11) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 29.418813ms) Dec 17 13:33:24.553: INFO: (11) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 30.35725ms) Dec 17 13:33:24.553: INFO: (11) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 30.591781ms) Dec 17 13:33:24.553: INFO: (11) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 30.572625ms) Dec 17 13:33:24.553: INFO: (11) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 30.715465ms) Dec 17 13:33:24.553: INFO: (11) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test (200; 32.890135ms) Dec 17 13:33:24.572: INFO: (12) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 16.254862ms) Dec 17 13:33:24.573: INFO: (12) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 17.237813ms) Dec 17 13:33:24.573: INFO: (12) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 17.405528ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 17.796893ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 18.030038ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 17.806792ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 18.187239ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 17.834054ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 17.977237ms) Dec 17 13:33:24.574: INFO: (12) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test (200; 17.885468ms) Dec 17 13:33:24.579: INFO: (12) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 22.993071ms) Dec 17 13:33:24.581: INFO: (12) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 25.231113ms) Dec 17 13:33:24.608: INFO: (12) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 52.322102ms) Dec 17 13:33:24.608: INFO: (12) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 52.361352ms) Dec 17 13:33:24.612: INFO: (12) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 56.311074ms) Dec 17 13:33:24.677: INFO: (13) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 63.179927ms) Dec 17 13:33:24.687: INFO: (13) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test<... 
(200; 73.685327ms) Dec 17 13:33:24.687: INFO: (13) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 73.330802ms) Dec 17 13:33:24.687: INFO: (13) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 73.183836ms) Dec 17 13:33:24.687: INFO: (13) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 73.686981ms) Dec 17 13:33:24.690: INFO: (13) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 76.462541ms) Dec 17 13:33:24.690: INFO: (13) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 76.694272ms) Dec 17 13:33:24.691: INFO: (13) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 77.729147ms) Dec 17 13:33:24.692: INFO: (13) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 78.372698ms) Dec 17 13:33:24.692: INFO: (13) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 78.746159ms) Dec 17 13:33:24.693: INFO: (13) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 78.733024ms) Dec 17 13:33:24.698: INFO: (13) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 84.651847ms) Dec 17 13:33:24.698: INFO: (13) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 84.460723ms) Dec 17 13:33:24.698: INFO: (13) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 85.542467ms) Dec 17 13:33:24.699: INFO: (13) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 84.871566ms) Dec 17 13:33:24.723: INFO: (14) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 24.544139ms) Dec 17 13:33:24.723: INFO: (14) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 24.126529ms) Dec 17 13:33:24.727: INFO: (14) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 28.116007ms) Dec 17 13:33:24.730: INFO: (14) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 30.577813ms) Dec 17 13:33:24.730: INFO: (14) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 31.054981ms) Dec 17 13:33:24.730: INFO: (14) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 30.595938ms) Dec 17 13:33:24.731: INFO: (14) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 32.027449ms) Dec 17 13:33:24.731: INFO: (14) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... 
(200; 32.082139ms) Dec 17 13:33:24.732: INFO: (14) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 32.405708ms) Dec 17 13:33:24.732: INFO: (14) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test (200; 37.158182ms) Dec 17 13:33:24.776: INFO: (15) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 39.01018ms) Dec 17 13:33:24.776: INFO: (15) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 38.98357ms) Dec 17 13:33:24.776: INFO: (15) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 39.348704ms) Dec 17 13:33:24.776: INFO: (15) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 39.343968ms) Dec 17 13:33:24.776: INFO: (15) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 38.922408ms) Dec 17 13:33:24.778: INFO: (15) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 40.923382ms) Dec 17 13:33:24.778: INFO: (15) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test<... (200; 45.155457ms) Dec 17 13:33:24.801: INFO: (16) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 18.697055ms) Dec 17 13:33:24.803: INFO: (16) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 20.151764ms) Dec 17 13:33:24.804: INFO: (16) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 21.596279ms) Dec 17 13:33:24.805: INFO: (16) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 21.819323ms) Dec 17 13:33:24.805: INFO: (16) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 21.824088ms) Dec 17 13:33:24.805: INFO: (16) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 22.838849ms) Dec 17 13:33:24.805: INFO: (16) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 22.63607ms) Dec 17 13:33:24.807: INFO: (16) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 24.063155ms) Dec 17 13:33:24.807: INFO: (16) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 24.223214ms) Dec 17 13:33:24.807: INFO: (16) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 24.382707ms) Dec 17 13:33:24.808: INFO: (16) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 25.869995ms) Dec 17 13:33:24.809: INFO: (16) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 25.936153ms) Dec 17 13:33:24.810: INFO: (16) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: ... 
(200; 16.950823ms) Dec 17 13:33:24.832: INFO: (17) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 18.942431ms) Dec 17 13:33:24.832: INFO: (17) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 18.687464ms) Dec 17 13:33:24.832: INFO: (17) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 19.612043ms) Dec 17 13:33:24.833: INFO: (17) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 19.90053ms) Dec 17 13:33:24.834: INFO: (17) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 21.018997ms) Dec 17 13:33:24.835: INFO: (17) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 21.793571ms) Dec 17 13:33:24.835: INFO: (17) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 22.406699ms) Dec 17 13:33:24.841: INFO: (17) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 28.026665ms) Dec 17 13:33:24.844: INFO: (17) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 30.452418ms) Dec 17 13:33:24.845: INFO: (17) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 31.648315ms) Dec 17 13:33:24.845: INFO: (17) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 32.039332ms) Dec 17 13:33:24.846: INFO: (17) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 32.489324ms) Dec 17 13:33:24.846: INFO: (17) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname2/proxy/: tls qux (200; 32.786674ms) Dec 17 13:33:24.865: INFO: (18) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 18.017537ms) Dec 17 13:33:24.865: INFO: (18) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj/proxy/: test (200; 18.063921ms) Dec 17 13:33:24.865: INFO: (18) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 17.992448ms) Dec 17 13:33:24.865: INFO: (18) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 17.879571ms) Dec 17 13:33:24.865: INFO: (18) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 18.329159ms) Dec 17 13:33:24.865: INFO: (18) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 18.462025ms) Dec 17 13:33:24.867: INFO: (18) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 20.814645ms) Dec 17 13:33:24.868: INFO: (18) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 21.362561ms) Dec 17 13:33:24.868: INFO: (18) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 20.745062ms) Dec 17 13:33:24.868: INFO: (18) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 20.819774ms) Dec 17 13:33:24.869: INFO: (18) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 21.511028ms) Dec 17 13:33:24.870: INFO: (18) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 22.742576ms) Dec 17 13:33:24.870: INFO: (18) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... 
(200; 22.996024ms) Dec 17 13:33:24.871: INFO: (18) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: test (200; 20.850279ms) Dec 17 13:33:24.895: INFO: (19) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:162/proxy/: bar (200; 20.608622ms) Dec 17 13:33:24.895: INFO: (19) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:1080/proxy/: test<... (200; 20.860266ms) Dec 17 13:33:24.896: INFO: (19) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:160/proxy/: foo (200; 21.945228ms) Dec 17 13:33:24.897: INFO: (19) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:1080/proxy/: ... (200; 22.990674ms) Dec 17 13:33:24.898: INFO: (19) /api/v1/namespaces/proxy-3503/pods/proxy-service-g9928-qpvqj:160/proxy/: foo (200; 23.75305ms) Dec 17 13:33:24.898: INFO: (19) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:460/proxy/: tls baz (200; 24.012991ms) Dec 17 13:33:24.898: INFO: (19) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname2/proxy/: bar (200; 24.120915ms) Dec 17 13:33:24.899: INFO: (19) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:462/proxy/: tls qux (200; 24.430682ms) Dec 17 13:33:24.900: INFO: (19) /api/v1/namespaces/proxy-3503/pods/http:proxy-service-g9928-qpvqj:162/proxy/: bar (200; 25.301849ms) Dec 17 13:33:24.901: INFO: (19) /api/v1/namespaces/proxy-3503/services/https:proxy-service-g9928:tlsportname1/proxy/: tls baz (200; 26.138094ms) Dec 17 13:33:24.902: INFO: (19) /api/v1/namespaces/proxy-3503/services/http:proxy-service-g9928:portname1/proxy/: foo (200; 27.986362ms) Dec 17 13:33:24.903: INFO: (19) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname1/proxy/: foo (200; 28.559231ms) Dec 17 13:33:24.903: INFO: (19) /api/v1/namespaces/proxy-3503/services/proxy-service-g9928:portname2/proxy/: bar (200; 28.816062ms) Dec 17 13:33:24.905: INFO: (19) /api/v1/namespaces/proxy-3503/pods/https:proxy-service-g9928-qpvqj:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-bba41be8-d540-414c-9822-5da3f0bd5405 STEP: Creating a pod to test consume configMaps Dec 17 13:33:42.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81" in namespace "configmap-636" to be "success or failure" Dec 17 13:33:42.909: INFO: Pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 79.833384ms Dec 17 13:33:44.924: INFO: Pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094806457s Dec 17 13:33:46.935: INFO: Pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1056567s Dec 17 13:33:48.950: INFO: Pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120511329s Dec 17 13:33:50.970: INFO: Pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.140530546s STEP: Saw pod success Dec 17 13:33:50.970: INFO: Pod "pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81" satisfied condition "success or failure" Dec 17 13:33:50.980: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81 container configmap-volume-test: STEP: delete the pod Dec 17 13:33:51.104: INFO: Waiting for pod pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81 to disappear Dec 17 13:33:51.114: INFO: Pod pod-configmaps-ec139d51-6602-49f2-a856-a387d6a60c81 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:33:51.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-636" for this suite. Dec 17 13:33:57.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:33:57.291: INFO: namespace configmap-636 deletion completed in 6.167151525s • [SLOW TEST:14.557 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:33:57.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Dec 17 13:33:57.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1372' Dec 17 13:33:57.824: INFO: stderr: "" Dec 17 13:33:57.824: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
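For reference, the kubectl invocations further below (--tail, --limit-bytes, --timestamps, --since) correspond one-to-one to fields on client-go's corev1.PodLogOptions. A minimal sketch, not part of the captured run, assuming a v1.15-era client-go whose request methods take no context, and reusing the pod, container, and namespace names from this test; the test itself shells out to kubectl rather than calling client-go, and it exercises each flag in a separate invocation where this sketch combines them:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	tail := int64(1)  // kubectl logs --tail=1
	limit := int64(1) // kubectl logs --limit-bytes=1
	since := int64(1) // kubectl logs --since=1s
	opts := &corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    &tail,
		LimitBytes:   &limit,
		SinceSeconds: &since,
		Timestamps:   true, // kubectl logs --timestamps
	}
	// GetLogs returns a rest.Request; DoRaw executes it and returns the body.
	raw, err := cs.CoreV1().Pods("kubectl-1372").GetLogs("redis-master-9hq7b", opts).DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", raw)
}
```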
Dec 17 13:33:58.844: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:33:58.844: INFO: Found 0 / 1 Dec 17 13:33:59.841: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:33:59.842: INFO: Found 0 / 1 Dec 17 13:34:00.837: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:00.837: INFO: Found 0 / 1 Dec 17 13:34:01.868: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:01.868: INFO: Found 0 / 1 Dec 17 13:34:02.842: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:02.842: INFO: Found 0 / 1 Dec 17 13:34:03.837: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:03.838: INFO: Found 0 / 1 Dec 17 13:34:04.833: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:04.833: INFO: Found 0 / 1 Dec 17 13:34:05.838: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:05.839: INFO: Found 0 / 1 Dec 17 13:34:06.837: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:06.837: INFO: Found 1 / 1 Dec 17 13:34:06.837: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 17 13:34:06.842: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:34:06.842: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Dec 17 13:34:06.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9hq7b redis-master --namespace=kubectl-1372' Dec 17 13:34:07.064: INFO: stderr: "" Dec 17 13:34:07.064: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Dec 13:34:05.474 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 13:34:05.474 # Server started, Redis version 3.2.12\n1:M 17 Dec 13:34:05.475 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 17 Dec 13:34:05.475 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Dec 17 13:34:07.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9hq7b redis-master --namespace=kubectl-1372 --tail=1' Dec 17 13:34:07.219: INFO: stderr: "" Dec 17 13:34:07.219: INFO: stdout: "1:M 17 Dec 13:34:05.475 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Dec 17 13:34:07.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9hq7b redis-master --namespace=kubectl-1372 --limit-bytes=1' Dec 17 13:34:07.386: INFO: stderr: "" Dec 17 13:34:07.387: INFO: stdout: " " STEP: exposing timestamps Dec 17 13:34:07.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9hq7b redis-master --namespace=kubectl-1372 --tail=1 --timestamps' Dec 17 13:34:07.586: INFO: stderr: "" Dec 17 13:34:07.587: INFO: stdout: "2019-12-17T13:34:05.476178293Z 1:M 17 Dec 13:34:05.475 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Dec 17 13:34:10.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9hq7b redis-master --namespace=kubectl-1372 --since=1s' Dec 17 13:34:10.308: INFO: stderr: "" Dec 17 13:34:10.308: INFO: stdout: "" Dec 17 13:34:10.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9hq7b redis-master --namespace=kubectl-1372 --since=24h' Dec 17 13:34:10.525: INFO: stderr: "" Dec 17 13:34:10.526: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Dec 13:34:05.474 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 13:34:05.474 # Server started, Redis version 3.2.12\n1:M 17 Dec 13:34:05.475 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Dec 13:34:05.475 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Dec 17 13:34:10.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1372' Dec 17 13:34:10.669: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 17 13:34:10.669: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Dec 17 13:34:10.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1372' Dec 17 13:34:10.811: INFO: stderr: "No resources found.\n" Dec 17 13:34:10.811: INFO: stdout: "" Dec 17 13:34:10.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1372 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 17 13:34:11.014: INFO: stderr: "" Dec 17 13:34:11.015: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:34:11.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1372" for this suite. Dec 17 13:34:33.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:34:33.256: INFO: namespace kubectl-1372 deletion completed in 22.23207272s • [SLOW TEST:35.965 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:34:33.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-zqtp STEP: Creating a pod to test atomic-volume-subpath Dec 17 13:34:33.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zqtp" in namespace "subpath-2170" to be "success or failure" Dec 17 13:34:33.468: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Pending", Reason="", readiness=false. Elapsed: 48.095133ms Dec 17 13:34:35.478: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058077469s Dec 17 13:34:37.487: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066942703s Dec 17 13:34:39.497: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.076633922s Dec 17 13:34:41.507: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087470611s Dec 17 13:34:43.520: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 10.099733863s Dec 17 13:34:45.536: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 12.11575572s Dec 17 13:34:47.547: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 14.126578529s Dec 17 13:34:49.558: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 16.137755499s Dec 17 13:34:51.569: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 18.149358862s Dec 17 13:34:53.580: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 20.160021729s Dec 17 13:34:55.588: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 22.168418984s Dec 17 13:34:57.605: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 24.185429978s Dec 17 13:34:59.616: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 26.196300414s Dec 17 13:35:01.627: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 28.207083443s Dec 17 13:35:03.642: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Running", Reason="", readiness=true. Elapsed: 30.221742535s Dec 17 13:35:05.652: INFO: Pod "pod-subpath-test-configmap-zqtp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.232475221s STEP: Saw pod success Dec 17 13:35:05.653: INFO: Pod "pod-subpath-test-configmap-zqtp" satisfied condition "success or failure" Dec 17 13:35:05.660: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zqtp container test-container-subpath-configmap-zqtp: STEP: delete the pod Dec 17 13:35:05.714: INFO: Waiting for pod pod-subpath-test-configmap-zqtp to disappear Dec 17 13:35:05.724: INFO: Pod pod-subpath-test-configmap-zqtp no longer exists STEP: Deleting pod pod-subpath-test-configmap-zqtp Dec 17 13:35:05.724: INFO: Deleting pod "pod-subpath-test-configmap-zqtp" in namespace "subpath-2170" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:35:05.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2170" for this suite. 
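The subpath test above mounts a single ConfigMap key over an existing file by setting subPath on the volume mount, which is what keeps the pod in Running while the framework's container repeatedly verifies the projected content. A minimal sketch of the relevant corev1 fields; the volume name, ConfigMap name, key, and target path here are hypothetical, since the test generates its own manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0444)
	// A ConfigMap-backed volume; name and key are hypothetical.
	vol := corev1.Volume{
		Name: "config",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
				DefaultMode:          &mode,
			},
		},
	}
	// subPath projects a single key of the volume over one existing file
	// instead of shadowing the directory that contains it.
	mount := corev1.VolumeMount{
		Name:      "config",
		MountPath: "/etc/hosts",
		SubPath:   "hosts",
	}
	fmt.Printf("%+v\n%+v\n", vol, mount)
}
```

Without subPath, mounting at that path would shadow the whole enclosing directory; with it, only the one file is replaced.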
Dec 17 13:35:11.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:35:11.950: INFO: namespace subpath-2170 deletion completed in 6.136360449s • [SLOW TEST:38.692 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:35:11.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:35:12.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9802' Dec 17 13:35:12.780: INFO: stderr: "" Dec 17 13:35:12.780: INFO: stdout: "replicationcontroller/redis-master created\n" Dec 17 13:35:12.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9802' Dec 17 13:35:13.556: INFO: stderr: "" Dec 17 13:35:13.556: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Dec 17 13:35:14.576: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:14.576: INFO: Found 0 / 1 Dec 17 13:35:15.590: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:15.590: INFO: Found 0 / 1 Dec 17 13:35:16.576: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:16.577: INFO: Found 0 / 1 Dec 17 13:35:17.594: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:17.595: INFO: Found 0 / 1 Dec 17 13:35:18.576: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:18.576: INFO: Found 0 / 1 Dec 17 13:35:19.570: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:19.570: INFO: Found 0 / 1 Dec 17 13:35:20.573: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:20.574: INFO: Found 0 / 1 Dec 17 13:35:21.567: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:21.567: INFO: Found 1 / 1 Dec 17 13:35:21.567: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 17 13:35:21.572: INFO: Selector matched 1 pods for map[app:redis] Dec 17 13:35:21.572: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
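The "Selector matched 1 pods ... Found 0 / 1 ... Found 1 / 1" entries above come from polling the pod list with a label selector until the expected number of pods is up. A loose sketch of that wait loop, assuming v1.15-era context-free client-go signatures; the framework's own readiness accounting is more involved than a phase check:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll the pod list by label selector until every matching pod is Running,
	// loosely mirroring the "Found 0 / 1 ... Found 1 / 1" entries above.
	for {
		pods, err := cs.CoreV1().Pods("kubectl-9802").List(metav1.ListOptions{
			LabelSelector: "app=redis",
		})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("Found %d / %d\n", running, len(pods.Items))
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(time.Second)
	}
}
```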
Dec 17 13:35:21.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-n7cfx --namespace=kubectl-9802' Dec 17 13:35:21.839: INFO: stderr: "" Dec 17 13:35:21.840: INFO: stdout: "Name: redis-master-n7cfx\nNamespace: kubectl-9802\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Tue, 17 Dec 2019 13:35:12 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://b5383b1a4d2316270a65191ea5aed01312491d6e93aa77fa5ee07b421a8ab201\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 17 Dec 2019 13:35:20 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tw8v5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tw8v5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tw8v5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-9802/redis-master-n7cfx to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Dec 17 13:35:21.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9802' Dec 17 13:35:21.998: INFO: stderr: "" Dec 17 13:35:21.998: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9802\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-n7cfx\n" Dec 17 13:35:21.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9802' Dec 17 13:35:22.166: INFO: stderr: "" Dec 17 13:35:22.167: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9802\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.216.33\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: <none>\n" Dec 17 13:35:22.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Dec 17 13:35:22.400: INFO: stderr: "" Dec 17 13:35:22.400: INFO: stdout: "Name: iruya-node\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n 
kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: <none>\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Tue, 17 Dec 2019 13:35:13 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 17 Dec 2019 13:35:13 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 17 Dec 2019 13:35:13 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 17 Dec 2019 13:35:13 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 135d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 66d\n kubectl-9802 redis-master-n7cfx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Dec 17 13:35:22.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9802' Dec 17 13:35:22.522: INFO: stderr: "" Dec 17 13:35:22.522: INFO: stdout: "Name: kubectl-9802\nLabels: e2e-framework=kubectl\n e2e-run=62618ab6-e8a5-4484-b827-2fa21745f237\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:35:22.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9802" for this suite. 
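Each Events table in the describe output above is not stored on the described object itself; kubectl assembles it by listing Event objects whose involvedObject fields point at the resource. A minimal sketch of that lookup, reusing the pod and namespace names from the log and assuming v1.15-era context-free client-go signatures:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the events that reference the pod; this is the raw form of the
	// Events table kubectl describe prints.
	events, err := cs.CoreV1().Events("kubectl-9802").List(metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=redis-master-n7cfx",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\t%s\n", e.Type, e.Reason, e.Source.Component, e.Message)
	}
}
```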
Dec 17 13:35:46.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:35:46.736: INFO: namespace kubectl-9802 deletion completed in 24.169272578s • [SLOW TEST:34.785 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:35:46.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 17 13:35:46.924: INFO: Waiting up to 5m0s for pod "downward-api-053aefda-7283-4333-af98-5a4f70609744" in namespace "downward-api-4138" to be "success or failure" Dec 17 13:35:46.931: INFO: Pod "downward-api-053aefda-7283-4333-af98-5a4f70609744": Phase="Pending", Reason="", readiness=false. Elapsed: 7.103668ms Dec 17 13:35:48.941: INFO: Pod "downward-api-053aefda-7283-4333-af98-5a4f70609744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016857737s Dec 17 13:35:50.972: INFO: Pod "downward-api-053aefda-7283-4333-af98-5a4f70609744": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048073346s Dec 17 13:35:52.991: INFO: Pod "downward-api-053aefda-7283-4333-af98-5a4f70609744": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067158123s Dec 17 13:35:55.006: INFO: Pod "downward-api-053aefda-7283-4333-af98-5a4f70609744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08223005s STEP: Saw pod success Dec 17 13:35:55.007: INFO: Pod "downward-api-053aefda-7283-4333-af98-5a4f70609744" satisfied condition "success or failure" Dec 17 13:35:55.015: INFO: Trying to get logs from node iruya-node pod downward-api-053aefda-7283-4333-af98-5a4f70609744 container dapi-container: STEP: delete the pod Dec 17 13:35:55.126: INFO: Waiting for pod downward-api-053aefda-7283-4333-af98-5a4f70609744 to disappear Dec 17 13:35:55.136: INFO: Pod downward-api-053aefda-7283-4333-af98-5a4f70609744 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:35:55.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4138" for this suite. 
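The downward API test above injects limits.cpu and limits.memory as environment variables via resourceFieldRef; because the container declares no limits, the kubelet falls back to the node's allocatable values, hence "from node allocatable" in the test name. A minimal sketch of the env section using corev1 types; the variable names are illustrative, not the test's:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With no resources.limits on the container, these resolve to the node's
	// allocatable CPU and memory rather than failing.
	env := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
```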
Dec 17 13:36:01.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:36:01.262: INFO: namespace downward-api-4138 deletion completed in 6.121374238s • [SLOW TEST:14.526 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:36:01.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1217 13:36:04.661192 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 17 13:36:04.661: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:36:04.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1650" for this suite. 
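The garbage collector test above deletes a Deployment without orphaning and then waits for its ReplicaSet and Pods to disappear, which is why the log briefly reports "expected 0 rs, got 1 rs" while the collector walks the ownerReference chain (Deployment to ReplicaSet to Pods). A sketch of a non-orphaning delete, assuming v1.15-era context-free signatures; the deployment name here is hypothetical and the test's exact DeleteOptions live in the framework:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Background propagation hands the children to the garbage collector, which
	// deletes the ReplicaSet and Pods by following ownerReferences after the
	// Deployment itself is gone.
	policy := metav1.DeletePropagationBackground
	err = cs.AppsV1().Deployments("gc-1650").Delete("test-deployment",
		&metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
```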
Dec 17 13:36:10.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:36:11.041: INFO: namespace gc-1650 deletion completed in 6.373490838s • [SLOW TEST:9.778 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:36:11.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:36:11.245: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 17 13:36:16.255: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 17 13:36:18.303: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Dec 17 13:36:28.388: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5760,SelfLink:/apis/apps/v1/namespaces/deployment-5760/deployments/test-cleanup-deployment,UID:aff02fd5-eeb8-4daa-83e9-8a926f67af7d,ResourceVersion:17014796,Generation:1,CreationTimestamp:2019-12-17 13:36:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-17 13:36:18 +0000 UTC 2019-12-17 13:36:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-17 13:36:27 +0000 UTC 2019-12-17 13:36:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Dec 17 13:36:28.397: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5760,SelfLink:/apis/apps/v1/namespaces/deployment-5760/replicasets/test-cleanup-deployment-55bbcbc84c,UID:59c4037f-7333-4f92-8d5e-daa783ddfe73,ResourceVersion:17014785,Generation:1,CreationTimestamp:2019-12-17 13:36:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment aff02fd5-eeb8-4daa-83e9-8a926f67af7d 0xc00194d577 0xc00194d578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Dec 17 13:36:28.402: INFO: Pod "test-cleanup-deployment-55bbcbc84c-cb2j2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-cb2j2,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5760,SelfLink:/api/v1/namespaces/deployment-5760/pods/test-cleanup-deployment-55bbcbc84c-cb2j2,UID:2659b621-0d74-4500-bbc9-2718fc55e428,ResourceVersion:17014784,Generation:0,CreationTimestamp:2019-12-17 13:36:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 59c4037f-7333-4f92-8d5e-daa783ddfe73 0xc00194dbb7 0xc00194dbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-shdw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-shdw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-shdw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00194dc30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00194dc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:36:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:36:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 
UTC 2019-12-17 13:36:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:36:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-17 13:36:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-17 13:36:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://239f50b2d94b2e04342c1995c598148394749dc3e86a161f9c99d423d8436954}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:36:28.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5760" for this suite. Dec 17 13:36:36.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:36:36.579: INFO: namespace deployment-5760 deletion completed in 8.167868868s • [SLOW TEST:25.539 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:36:36.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 17 13:36:36.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8453' Dec 17 13:36:39.093: INFO: stderr: "" Dec 17 13:36:39.093: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 17 13:36:39.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8453' Dec 17 13:36:41.317: INFO: stderr: "" Dec 17 13:36:41.317: INFO: stdout: "update-demo-nautilus-dr77x update-demo-nautilus-znm6q " Dec 17 13:36:41.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr77x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8453' Dec 17 13:36:41.471: INFO: stderr: "" Dec 17 13:36:41.472: INFO: stdout: "" Dec 17 13:36:41.472: INFO: update-demo-nautilus-dr77x is created but not running Dec 17 13:36:46.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8453' Dec 17 13:36:47.165: INFO: stderr: "" Dec 17 13:36:47.166: INFO: stdout: "update-demo-nautilus-dr77x update-demo-nautilus-znm6q " Dec 17 13:36:47.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr77x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8453' Dec 17 13:36:47.290: INFO: stderr: "" Dec 17 13:36:47.290: INFO: stdout: "" Dec 17 13:36:47.290: INFO: update-demo-nautilus-dr77x is created but not running Dec 17 13:36:52.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8453' Dec 17 13:36:52.562: INFO: stderr: "" Dec 17 13:36:52.563: INFO: stdout: "update-demo-nautilus-dr77x update-demo-nautilus-znm6q " Dec 17 13:36:52.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr77x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8453' Dec 17 13:36:52.744: INFO: stderr: "" Dec 17 13:36:52.744: INFO: stdout: "true" Dec 17 13:36:52.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dr77x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8453' Dec 17 13:36:52.962: INFO: stderr: "" Dec 17 13:36:52.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 13:36:52.963: INFO: validating pod update-demo-nautilus-dr77x Dec 17 13:36:52.991: INFO: got data: { "image": "nautilus.jpg" } Dec 17 13:36:52.991: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 13:36:52.991: INFO: update-demo-nautilus-dr77x is verified up and running Dec 17 13:36:52.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znm6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8453' Dec 17 13:36:53.102: INFO: stderr: "" Dec 17 13:36:53.103: INFO: stdout: "true" Dec 17 13:36:53.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znm6q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8453' Dec 17 13:36:53.298: INFO: stderr: "" Dec 17 13:36:53.299: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 17 13:36:53.299: INFO: validating pod update-demo-nautilus-znm6q Dec 17 13:36:53.316: INFO: got data: { "image": "nautilus.jpg" } Dec 17 13:36:53.316: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 17 13:36:53.316: INFO: update-demo-nautilus-znm6q is verified up and running STEP: using delete to clean up resources Dec 17 13:36:53.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8453' Dec 17 13:36:53.429: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 17 13:36:53.429: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 17 13:36:53.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8453' Dec 17 13:36:53.759: INFO: stderr: "No resources found.\n" Dec 17 13:36:53.759: INFO: stdout: "" Dec 17 13:36:53.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8453 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 17 13:36:53.904: INFO: stderr: "" Dec 17 13:36:53.904: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:36:53.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8453" for this suite. 
Dec 17 13:37:16.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:37:16.787: INFO: namespace kubectl-8453 deletion completed in 22.873515128s • [SLOW TEST:40.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:37:16.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-dec201f9-46ed-4fd4-ae7f-332102a4257e STEP: Creating a pod to test consume secrets Dec 17 13:37:16.889: INFO: Waiting up to 5m0s for pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48" in namespace "secrets-2357" to be "success or failure" Dec 17 13:37:16.991: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48": Phase="Pending", Reason="", readiness=false. Elapsed: 101.320898ms Dec 17 13:37:18.998: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108496051s Dec 17 13:37:21.004: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114574766s Dec 17 13:37:23.010: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120996491s Dec 17 13:37:25.017: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128054175s Dec 17 13:37:27.027: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137626321s STEP: Saw pod success Dec 17 13:37:27.027: INFO: Pod "pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48" satisfied condition "success or failure" Dec 17 13:37:27.030: INFO: Trying to get logs from node iruya-node pod pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48 container secret-volume-test: STEP: delete the pod Dec 17 13:37:27.071: INFO: Waiting for pod pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48 to disappear Dec 17 13:37:27.116: INFO: Pod pod-secrets-ec6c90f2-f32e-4291-abeb-62e0aa013e48 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:37:27.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2357" for this suite. 
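
The secret-volume test above reduces to mounting a Secret with an items: mapping and an explicit per-key mode. A minimal sketch of an equivalent secret and pod; the names (secret-test-map, data-1, new-path-data-1), the busybox image, and the 0400 mode are illustrative stand-ins, not the suite's generated spec:

kubectl create secret generic secret-test-map --from-literal=data-1=value-1 --namespace=secrets-2357
cat <<'EOF' | kubectl create -f - --namespace=secrets-2357
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # -L dereferences the symlinks kubelet creates, so the item mode is visible.
    command: ["sh", "-c", "ls -lL /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1   # the "mapping": key data-1 appears under this file name
        mode: 0400              # the "Item Mode": per-file permission bits
EOF

The pod runs to completion and its log carries the file listing and contents, which is why the test above waits for "success or failure" and then fetches logs from container secret-volume-test before deleting the pod.
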
Dec 17 13:37:33.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:37:33.337: INFO: namespace secrets-2357 deletion completed in 6.214970128s • [SLOW TEST:16.549 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:37:33.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8957 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-8957 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8957 Dec 17 13:37:33.470: INFO: Found 0 stateful pods, waiting for 1 Dec 17 13:37:43.483: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 17 13:37:43.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:37:44.486: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:37:44.487: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:37:44.487: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:37:44.504: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 17 13:37:54.521: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:37:54.521: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:37:54.570: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:37:54.570: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:37:54.570: INFO: Dec 17 13:37:54.570: INFO: StatefulSet ss has not reached scale 3, at 1 Dec 17 13:37:56.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.96529185s Dec 17 13:37:57.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.346766245s Dec 17 13:37:58.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.109564528s Dec 17 13:37:59.507: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.043924609s Dec 17 13:38:01.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.027864007s Dec 17 13:38:02.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.673650468s Dec 17 13:38:04.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 644.77467ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8957 Dec 17 13:38:05.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:38:05.777: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 17 13:38:05.777: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 13:38:05.777: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 13:38:05.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:38:06.546: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 17 13:38:06.546: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 13:38:06.546: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 13:38:06.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:38:07.095: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 17 13:38:07.095: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 13:38:07.095: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 13:38:07.105: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 13:38:07.105: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 13:38:07.105: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Dec 17 13:38:07.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:38:07.628: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:38:07.628: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:38:07.628: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:38:07.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:38:08.097: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:38:08.097: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:38:08.097: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:38:08.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:38:08.868: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:38:08.869: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:38:08.869: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:38:08.869: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:38:08.894: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 17 13:38:18.911: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:38:18.911: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:38:18.911: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:38:18.935: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:18.935: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:18.935: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:18.935: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:18.935: INFO: Dec 17 13:38:18.935: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:20.588: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:20.589: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:20.589: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:20.589: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:20.589: INFO: Dec 17 13:38:20.589: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:21.606: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:21.606: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:21.606: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:21.606: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:21.606: INFO: Dec 17 13:38:21.606: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:24.751: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:24.751: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:24.751: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:24.752: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:24.752: INFO: Dec 17 13:38:24.752: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:25.762: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:25.762: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:25.762: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:25.762: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:25.762: INFO: Dec 17 13:38:25.762: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:26.772: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:26.773: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:26.773: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 
13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:26.773: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:26.773: INFO: Dec 17 13:38:26.773: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:27.786: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:27.786: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:27.787: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:27.787: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:27.787: INFO: Dec 17 13:38:27.787: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 17 13:38:28.795: INFO: POD NODE PHASE GRACE CONDITIONS Dec 17 13:38:28.795: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:33 +0000 UTC }] Dec 17 13:38:28.796: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:08 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:28.796: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:38:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:37:54 +0000 UTC }] Dec 17 13:38:28.796: INFO: Dec 17 13:38:28.796: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-8957 Dec 17 13:38:29.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:38:30.087: INFO: rc: 1 Dec 17 13:38:30.088: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00260ed80 exit status 1 true [0xc000e3b080 0xc000e3b180 0xc000e3b328] [0xc000e3b080 0xc000e3b180 0xc000e3b328] [0xc000e3b0e0 0xc000e3b260] [0xba6c50 0xba6c50] 0xc0024ffaa0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 17 13:38:40.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:38:40.314: INFO: rc: 1 Dec 17 13:38:40.315: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001bb91a0 exit status 1 true [0xc001d002e0 0xc001d002f8 0xc001d00310] [0xc001d002e0 0xc001d002f8 0xc001d00310] [0xc001d002f0 0xc001d00308] [0xba6c50 0xba6c50] 0xc002863980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:38:50.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:38:50.502: INFO: rc: 1 Dec 17 13:38:50.503: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002500240 exit status 1 true [0xc000944ea8 0xc000944fc0 0xc0009450a8] [0xc000944ea8 0xc000944fc0 0xc0009450a8] [0xc000944fa8 0xc000945068] [0xba6c50 0xba6c50] 0xc0027d6780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:39:00.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ ||
true' Dec 17 13:39:00.651: INFO: rc: 1 Dec 17 13:39:00.652: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0024c9230 exit status 1 true [0xc000dca348 0xc000dca3c8 0xc000dca468] [0xc000dca348 0xc000dca3c8 0xc000dca468] [0xc000dca390 0xc000dca438] [0xba6c50 0xba6c50] 0xc00249b980 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:39:10.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:39:10.795: INFO: rc: 1 Dec 17 13:39:10.795: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0024c92f0 exit status 1 true [0xc000dca480 0xc000dca510 0xc000dca560] [0xc000dca480 0xc000dca510 0xc000dca560] [0xc000dca4c8 0xc000dca550] [0xba6c50 0xba6c50] 0xc0026fc060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:39:20.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:39:20.979: INFO: rc: 1 Dec 17 13:39:20.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0024c93e0 exit status 1 true [0xc000dca578 0xc000dca5e8 0xc000dca618] [0xc000dca578 0xc000dca5e8 0xc000dca618] [0xc000dca5b0 0xc000dca608] [0xba6c50 0xba6c50] 0xc0026fcea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:39:30.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:39:31.113: INFO: rc: 1 Dec 17 13:39:31.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001bb9290 exit status 1 true [0xc001d00318 0xc001d00330 0xc001d00348] [0xc001d00318 0xc001d00330 0xc001d00348] [0xc001d00328 0xc001d00340] [0xba6c50 0xba6c50] 0xc002863ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:39:41.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:39:41.242: INFO: rc: 1 Dec 17 13:39:41.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002500330 exit status 1 true [0xc0009450b8 0xc000945178 0xc000945210] [0xc0009450b8 0xc000945178 0xc000945210] [0xc000945128 0xc0009451e0] [0xba6c50 0xba6c50] 0xc0027d6ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:39:51.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:39:51.498: INFO: rc: 1 Dec 17 13:39:51.498: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001bb93b0 exit status 1 true [0xc001d00350 0xc001d00368 0xc001d00380] [0xc001d00350 0xc001d00368 0xc001d00380] [0xc001d00360 0xc001d00378] [0xba6c50 0xba6c50] 0xc002711080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:40:01.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:40:01.659: INFO: rc: 1 Dec 17 13:40:01.659: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002644090 exit status 1 true [0xc0000eafb0 0xc0000eb260 0xc0000eb338] [0xc0000eafb0 0xc0000eb260 0xc0000eb338] [0xc0000eb180 0xc0000eb300] [0xba6c50 0xba6c50] 0xc00249a660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:40:11.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:40:11.878: INFO: rc: 1 Dec 17 13:40:11.879: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00245c090 exit status 1 true [0xc000374288 0xc000e3a218 0xc000e3a2e8] [0xc000374288 0xc000e3a218 0xc000e3a2e8] [0xc000e3a158 0xc000e3a2c8] [0xba6c50 0xba6c50] 0xc0028622a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:40:21.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:40:22.017: INFO: rc: 1 Dec 17 13:40:22.018: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002644150 exit status 1 true [0xc0000eb370 0xc0000eb4c8 0xc0000eb690] [0xc0000eb370 0xc0000eb4c8 0xc0000eb690] [0xc0000eb498 0xc0000eb688] [0xba6c50 
0xba6c50] 0xc00249ad20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:40:32.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:40:32.251: INFO: rc: 1 Dec 17 13:40:32.251: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002c040f0 exit status 1 true [0xc000dca000 0xc000dca0e0 0xc000dca178] [0xc000dca000 0xc000dca0e0 0xc000dca178] [0xc000dca050 0xc000dca158] [0xba6c50 0xba6c50] 0xc002a5aae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:40:42.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:40:42.457: INFO: rc: 1 Dec 17 13:40:42.458: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002644210 exit status 1 true [0xc0000eb730 0xc0000eb8c8 0xc0000ebad0] [0xc0000eb730 0xc0000eb8c8 0xc0000ebad0] [0xc0000eb7e8 0xc0000eb990] [0xba6c50 0xba6c50] 0xc00249b4a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:40:52.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:40:52.672: INFO: rc: 1 Dec 17 13:40:52.672: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002c041b0 exit status 1 true [0xc000dca1a8 0xc000dca1f0 0xc000dca250] [0xc000dca1a8 0xc000dca1f0 0xc000dca250] [0xc000dca1d0 0xc000dca228] [0xba6c50 0xba6c50] 0xc002a5b2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:41:02.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:41:02.882: INFO: rc: 1 Dec 17 13:41:02.883: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000b4a120 exit status 1 true [0xc001d00008 0xc001d00038 0xc001d00070] [0xc001d00008 0xc001d00038 0xc001d00070] [0xc001d00028 0xc001d00058] [0xba6c50 0xba6c50] 0xc0024fe2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Dec 17 13:41:12.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:41:13.034: INFO: rc: 1 Dec 17 13:41:13.034: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002c042d0 exit status 1 true [0xc000dca270 0xc000dca2e0 0xc000dca368] [0xc000dca270 0xc000dca2e0 0xc000dca368] [0xc000dca298 0xc000dca348] [0xba6c50 0xba6c50] 0xc002a5b860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
[Thirteen further retries of the same command, one every 10s from 13:41:23 through 13:43:25, elided: each exited rc: 1 with the identical 'Error from server (NotFound): pods "ss-1" not found'; only the hex pointer values in the dumped exec structs differ.]
Dec 17 13:43:35.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:43:35.920: INFO: rc: 1 Dec 17 13:43:35.921: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Dec 17 13:43:35.921: INFO: Scaling statefulset ss to 0 Dec 17 13:43:35.948: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 17 13:43:35.951: INFO: Deleting all statefulset in ns statefulset-8957 Dec 17 13:43:35.954: INFO: Scaling statefulset ss to 0 Dec 17 13:43:35.963: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:43:35.966: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:43:36.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8957" for this suite.
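Note: the retry loop collapsed above is the e2e framework's RunHostCmd helper shelling out to kubectl exec on a 10s poll. Reproduced by hand against a live cluster (namespace and pod name exactly as in the log; the shell string must be quoted when typed interactively, since the framework passes it as a single -c argument), the probe is just:

$ kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8957 ss-1 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

The NotFound failures appear to be expected at this point in the run: ss-1 has already been removed by the burst scale-down, so the restore command cannot succeed, and the framework eventually stops retrying and proceeds to scale the set to 0.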
Dec 17 13:43:44.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:43:44.220: INFO: namespace statefulset-8957 deletion completed in 8.176460168s • [SLOW TEST:370.882 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:43:44.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Dec 17 13:43:52.359: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-9d3414c5-cc94-4cb7-976d-3238c16a6cbe,GenerateName:,Namespace:events-5288,SelfLink:/api/v1/namespaces/events-5288/pods/send-events-9d3414c5-cc94-4cb7-976d-3238c16a6cbe,UID:06941f6d-8538-438a-baf1-801469cec4f6,ResourceVersion:17015664,Generation:0,CreationTimestamp:2019-12-17 13:43:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 328884555,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mvptq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mvptq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-mvptq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00194d7d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00194d7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:43:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:43:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:43:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 13:43:44 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-17 13:43:44 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-17 13:43:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://bd2a66100c91a1fc7ed9c03d289e3efc97a06362a259b65c3d0af1d76ae610c7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Dec 17 13:43:54.371: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Dec 17 13:43:56.504: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:43:56.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5288" for this suite. Dec 17 13:44:38.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:44:38.681: INFO: namespace events-5288 deletion completed in 42.146406003s • [SLOW TEST:54.460 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:44:38.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-65e76908-071c-433d-92eb-e8fb1a345031 STEP: Creating a pod to test consume secrets Dec 17 13:44:38.949: INFO: Waiting up to 5m0s for pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb" in namespace "secrets-793" to be "success or failure" Dec 17 13:44:38.974: INFO: Pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.043229ms Dec 17 13:44:40.981: INFO: Pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031252396s Dec 17 13:44:42.989: INFO: Pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039493767s Dec 17 13:44:45.004: INFO: Pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054737522s Dec 17 13:44:47.012: INFO: Pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062299313s STEP: Saw pod success Dec 17 13:44:47.012: INFO: Pod "pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb" satisfied condition "success or failure" Dec 17 13:44:47.015: INFO: Trying to get logs from node iruya-node pod pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb container secret-volume-test: STEP: delete the pod Dec 17 13:44:47.090: INFO: Waiting for pod pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb to disappear Dec 17 13:44:47.153: INFO: Pod pod-secrets-534d71c0-d7a4-4505-8cc3-b94ef09bfcdb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:44:47.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-793" for this suite. Dec 17 13:44:53.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:44:53.384: INFO: namespace secrets-793 deletion completed in 6.222240522s • [SLOW TEST:14.703 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:44:53.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-80226866-ead3-46aa-aa68-1c5379bd6cc6 STEP: Creating a pod to test consume configMaps Dec 17 13:44:53.582: INFO: Waiting up to 5m0s for pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b" in namespace "configmap-2105" to be "success or failure" Dec 17 13:44:53.638: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b": Phase="Pending", Reason="", readiness=false. Elapsed: 55.285154ms Dec 17 13:44:55.646: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064041012s Dec 17 13:44:57.678: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.095662908s Dec 17 13:44:59.686: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104031641s Dec 17 13:45:01.734: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b": Phase="Running", Reason="", readiness=true. Elapsed: 8.151987507s Dec 17 13:45:03.801: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.218909127s STEP: Saw pod success Dec 17 13:45:03.802: INFO: Pod "pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b" satisfied condition "success or failure" Dec 17 13:45:03.807: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b container configmap-volume-test: STEP: delete the pod Dec 17 13:45:04.851: INFO: Waiting for pod pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b to disappear Dec 17 13:45:04.865: INFO: Pod pod-configmaps-2209cc21-1e80-43f2-ab08-100a5ae1f52b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:45:04.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2105" for this suite. Dec 17 13:45:10.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:45:11.045: INFO: namespace configmap-2105 deletion completed in 6.164352205s • [SLOW TEST:17.660 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:45:11.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 17 13:45:11.153: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 17 13:45:13.266: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:45:14.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9976" for this suite. 
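Note: the quota interaction the ReplicationController test just exercised (quota "condition-test" allows two pods, the rc asks for three, a failure condition surfaces, scaling down clears it) can be sketched by hand. Only the names "condition-test" and the two-pod limit come from the log; the namespace and the pause image below are assumptions, and any small image would do:

$ kubectl create namespace quota-demo
$ kubectl create quota condition-test --hard=pods=2 --namespace=quota-demo
$ kubectl create --namespace=quota-demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # assumption: any minimal image works
EOF
$ kubectl get rc condition-test --namespace=quota-demo -o jsonpath='{.status.conditions}'
$ kubectl scale rc condition-test --replicas=2 --namespace=quota-demo

While replicas exceed the quota, status.conditions should carry a ReplicaFailure entry; after the scale-down it should clear, which is the sequence the test asserts on above.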
Dec 17 13:45:22.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:45:24.009: INFO: namespace replication-controller-9976 deletion completed in 9.717029459s • [SLOW TEST:12.963 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:45:24.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:45:34.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8789" for this suite. Dec 17 13:45:40.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:45:40.639: INFO: namespace emptydir-wrapper-8789 deletion completed in 6.176577794s • [SLOW TEST:16.628 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:45:40.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Dec 17 13:45:40.834: INFO: Waiting up to 5m0s for pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175" in namespace "containers-7355" to be "success or failure" Dec 17 13:45:40.897: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175": Phase="Pending", Reason="", readiness=false. 
Elapsed: 62.636067ms Dec 17 13:45:42.907: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072190719s Dec 17 13:45:44.911: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076630192s Dec 17 13:45:46.985: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150831072s Dec 17 13:45:49.037: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202575057s Dec 17 13:45:51.045: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210359246s STEP: Saw pod success Dec 17 13:45:51.045: INFO: Pod "client-containers-d131c077-40f5-46e3-9e25-a4012ff38175" satisfied condition "success or failure" Dec 17 13:45:51.049: INFO: Trying to get logs from node iruya-node pod client-containers-d131c077-40f5-46e3-9e25-a4012ff38175 container test-container: STEP: delete the pod Dec 17 13:45:51.144: INFO: Waiting for pod client-containers-d131c077-40f5-46e3-9e25-a4012ff38175 to disappear Dec 17 13:45:51.156: INFO: Pod client-containers-d131c077-40f5-46e3-9e25-a4012ff38175 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:45:51.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7355" for this suite. Dec 17 13:45:57.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:45:57.357: INFO: namespace containers-7355 deletion completed in 6.192269899s • [SLOW TEST:16.716 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:45:57.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 17 13:45:57.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b" in namespace "projected-2200" to be "success or failure" Dec 17 13:45:57.528: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b": Phase="Pending", Reason="", 
readiness=false. Elapsed: 13.875839ms Dec 17 13:45:59.541: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027403509s Dec 17 13:46:01.551: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036895033s Dec 17 13:46:03.558: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044747666s Dec 17 13:46:05.570: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056523199s Dec 17 13:46:07.578: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064240646s STEP: Saw pod success Dec 17 13:46:07.578: INFO: Pod "downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b" satisfied condition "success or failure" Dec 17 13:46:07.582: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b container client-container: STEP: delete the pod Dec 17 13:46:07.693: INFO: Waiting for pod downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b to disappear Dec 17 13:46:07.704: INFO: Pod downwardapi-volume-4691c423-04e4-417d-adaa-79988789ae0b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:46:07.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2200" for this suite. Dec 17 13:46:13.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:46:13.880: INFO: namespace projected-2200 deletion completed in 6.167726309s • [SLOW TEST:16.522 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:46:13.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Dec 17 13:46:13.990: INFO: Waiting up to 5m0s for pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba" in namespace "containers-9750" to be "success or failure" Dec 17 13:46:14.048: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.318483ms Dec 17 13:46:16.058: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067332552s Dec 17 13:46:18.075: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084410245s Dec 17 13:46:20.494: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503113055s Dec 17 13:46:22.504: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51338974s Dec 17 13:46:24.533: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.542459656s STEP: Saw pod success Dec 17 13:46:24.534: INFO: Pod "client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba" satisfied condition "success or failure" Dec 17 13:46:24.549: INFO: Trying to get logs from node iruya-node pod client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba container test-container: STEP: delete the pod Dec 17 13:46:25.545: INFO: Waiting for pod client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba to disappear Dec 17 13:46:25.558: INFO: Pod client-containers-f5f53e70-92d4-4ea0-b0a3-bbd922388bba no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:46:25.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9750" for this suite. Dec 17 13:46:31.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:46:31.723: INFO: namespace containers-9750 deletion completed in 6.156030448s • [SLOW TEST:17.842 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:46:31.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-d6415402-2f17-4ab4-96f6-3521c5f7131f in namespace container-probe-571 Dec 17 13:46:39.925: INFO: Started pod busybox-d6415402-2f17-4ab4-96f6-3521c5f7131f in namespace container-probe-571 STEP: checking the pod's current state and verifying that restartCount is present Dec 17 13:46:39.933: INFO: Initial restart count of pod 
busybox-d6415402-2f17-4ab4-96f6-3521c5f7131f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:50:41.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-571" for this suite. Dec 17 13:50:47.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:50:47.419: INFO: namespace container-probe-571 deletion completed in 6.164992433s • [SLOW TEST:255.696 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:50:47.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-d79e0c67-ca3c-4d23-86ff-db18764693b7 STEP: Creating a pod to test consume secrets Dec 17 13:50:47.573: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8" in namespace "projected-7410" to be "success or failure" Dec 17 13:50:47.594: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.538769ms Dec 17 13:50:49.603: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029260491s Dec 17 13:50:51.615: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041596749s Dec 17 13:50:53.630: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056196685s Dec 17 13:50:55.637: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064159389s Dec 17 13:50:57.647: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.07412668s STEP: Saw pod success Dec 17 13:50:57.648: INFO: Pod "pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8" satisfied condition "success or failure" Dec 17 13:50:57.653: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8 container projected-secret-volume-test: STEP: delete the pod Dec 17 13:50:57.890: INFO: Waiting for pod pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8 to disappear Dec 17 13:50:57.905: INFO: Pod pod-projected-secrets-edf0b708-713e-4c7b-a4f5-532d3eedf0e8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:50:57.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7410" for this suite. Dec 17 13:51:04.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:51:04.087: INFO: namespace projected-7410 deletion completed in 6.17487713s • [SLOW TEST:16.668 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:51:04.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 17 13:51:04.269: INFO: Waiting up to 5m0s for pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783" in namespace "emptydir-7957" to be "success or failure" Dec 17 13:51:04.285: INFO: Pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783": Phase="Pending", Reason="", readiness=false. Elapsed: 15.785596ms Dec 17 13:51:06.293: INFO: Pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024517565s Dec 17 13:51:08.306: INFO: Pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037346191s Dec 17 13:51:10.314: INFO: Pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044950449s Dec 17 13:51:12.327: INFO: Pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.058584537s STEP: Saw pod success Dec 17 13:51:12.328: INFO: Pod "pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783" satisfied condition "success or failure" Dec 17 13:51:12.334: INFO: Trying to get logs from node iruya-node pod pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783 container test-container: STEP: delete the pod Dec 17 13:51:12.398: INFO: Waiting for pod pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783 to disappear Dec 17 13:51:12.402: INFO: Pod pod-dd788d0f-d53a-474a-a7eb-d6c5fa612783 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:51:12.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7957" for this suite. Dec 17 13:51:18.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:51:18.709: INFO: namespace emptydir-7957 deletion completed in 6.298718458s • [SLOW TEST:14.621 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:51:18.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Dec 17 13:51:18.876: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 17 13:51:18.886: INFO: Waiting for terminating namespaces to be deleted... 
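Note: the predicate test below simply creates a pod whose nodeSelector matches no node label and then watches the namespace's events. A hand-rolled equivalent looks like this (the pod name matches the event recorded in the log; the selector key and value are assumptions, anything no node carries will do):

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: no-such-label   # assumption: a label no node in the cluster has
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
$ kubectl describe pod restricted-pod

On a two-node cluster like this one, describe should end with the same warning the test asserts on: FailedScheduling, "0/2 nodes are available: 2 node(s) didn't match node selector."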
Dec 17 13:51:18.889: INFO: Logging pods the kubelet thinks are on node iruya-node before test Dec 17 13:51:18.910: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded) Dec 17 13:51:18.911: INFO: Container kube-proxy ready: true, restart count 0 Dec 17 13:51:18.911: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Dec 17 13:51:18.911: INFO: Container weave ready: true, restart count 0 Dec 17 13:51:18.911: INFO: Container weave-npc ready: true, restart count 0 Dec 17 13:51:18.911: INFO: Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test Dec 17 13:51:18.926: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded) Dec 17 13:51:18.926: INFO: Container kube-scheduler ready: true, restart count 7 Dec 17 13:51:18.926: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Dec 17 13:51:18.926: INFO: Container coredns ready: true, restart count 0 Dec 17 13:51:18.926: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded) Dec 17 13:51:18.926: INFO: Container etcd ready: true, restart count 0 Dec 17 13:51:18.926: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Dec 17 13:51:18.926: INFO: Container weave ready: true, restart count 0 Dec 17 13:51:18.926: INFO: Container weave-npc ready: true, restart count 0 Dec 17 13:51:18.926: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded) Dec 17 13:51:18.926: INFO: Container coredns ready: true, restart count 0 Dec 17 13:51:18.926: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded) Dec 17 13:51:18.926: INFO: Container kube-controller-manager ready: true, restart count 10 Dec 17 13:51:18.926: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded) Dec 17 13:51:18.926: INFO: Container kube-proxy ready: true, restart count 0 Dec 17 13:51:18.927: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded) Dec 17 13:51:18.927: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e12cde8efaeb3c], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:51:19.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6519" for this suite.
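Note: the "Considering event" line above is the framework listing events in the test namespace and matching on reason. The same view by hand, while the namespace still exists (events support field selectors on reason and involvedObject.name):

$ kubectl get events --namespace=sched-pred-6519 --field-selector reason=FailedScheduling,involvedObject.name=restricted-pod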
Dec 17 13:51:26.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:51:26.182: INFO: namespace sched-pred-6519 deletion completed in 6.215018092s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.473 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:51:26.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Dec 17 13:51:36.966: INFO: Successfully updated pod "annotationupdate8c4c7796-cb49-4aea-abf2-9baf52942872" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:51:39.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4604" for this suite. 
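Note: the annotation test that just finished only logs its mutation step ("Successfully updated pod ..."); the other half of the check is the projected downwardAPI volume refreshing the mounted annotations file afterwards. By hand the mutation is a single command (pod name and namespace taken from the log, key and value hypothetical; the kubelet rewrites the projected file on its next sync, so the change becomes visible inside the container within roughly a minute):

$ kubectl annotate pod annotationupdate8c4c7796-cb49-4aea-abf2-9baf52942872 --namespace=projected-4604 demo-key=demo-value --overwrite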
Dec 17 13:52:01.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:52:01.739: INFO: namespace projected-4604 deletion completed in 22.206669216s • [SLOW TEST:35.556 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:52:01.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1154 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1154 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1154 Dec 17 13:52:01.911: INFO: Found 0 stateful pods, waiting for 1 Dec 17 13:52:11.924: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 17 13:52:11.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:52:14.968: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:52:14.969: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:52:14.969: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:52:14.986: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:52:14.986: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:52:15.023: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999795s Dec 17 13:52:16.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982625078s Dec 17 13:52:17.051: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.969213525s Dec 17 13:52:18.060: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.955537367s Dec 17 13:52:19.154: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.946055433s Dec 17 13:52:20.160: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 4.852547213s Dec 17 13:52:21.172: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.846165346s Dec 17 13:52:22.181: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.834094986s Dec 17 13:52:23.191: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.825713866s Dec 17 13:52:24.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 814.741968ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1154 Dec 17 13:52:25.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:52:25.749: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 17 13:52:25.749: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 13:52:25.749: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 13:52:25.829: INFO: Found 2 stateful pods, waiting for 3 Dec 17 13:52:35.840: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 13:52:35.840: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 13:52:35.840: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 17 13:52:45.838: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 17 13:52:45.838: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 17 13:52:45.839: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 17 13:52:45.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:52:46.338: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:52:46.338: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:52:46.338: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:52:46.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:52:46.837: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:52:46.837: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:52:46.837: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 17 13:52:46.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 17 13:52:47.409: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 17 13:52:47.409: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 17 13:52:47.410: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> 
'/tmp/index.html' Dec 17 13:52:47.410: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:52:47.418: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 17 13:52:57.478: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:52:57.478: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:52:57.478: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 17 13:52:57.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999996959s Dec 17 13:52:58.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988258814s Dec 17 13:52:59.532: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970461986s Dec 17 13:53:00.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.956694401s Dec 17 13:53:01.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.945670394s Dec 17 13:53:02.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935785328s Dec 17 13:53:03.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.91992822s Dec 17 13:53:04.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.56859904s Dec 17 13:53:05.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.559178975s Dec 17 13:53:06.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 518.158546ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1154 Dec 17 13:53:07.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:53:08.623: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 17 13:53:08.623: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 13:53:08.623: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 13:53:08.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:53:09.028: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 17 13:53:09.028: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 17 13:53:09.028: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 17 13:53:09.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:53:09.466: INFO: rc: 126 Dec 17 13:53:09.467: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] cannot exec in a stopped state: unknown command terminated with exit code 126 [] 0xc002644c00 exit status 126 true [0xc001d00298 0xc001d002b0 0xc001d002c8] [0xc001d00298 0xc001d002b0 0xc001d002c8] [0xc001d002a8 0xc001d002c0] [0xba6c50 0xba6c50] 0xc002e68600 }: Command stdout: cannot exec in a stopped state: unknown stderr: command terminated with exit 
code 126 error: exit status 126 Dec 17 13:53:19.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:53:19.788: INFO: rc: 1 Dec 17 13:53:19.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00245c240 exit status 1 true [0xc0019ec340 0xc0019ec358 0xc0019ec378] [0xc0019ec340 0xc0019ec358 0xc0019ec378] [0xc0019ec350 0xc0019ec370] [0xba6c50 0xba6c50] 0xc002a5b860 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 17 13:53:29.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:53:29.956: INFO: rc: 1 Dec 17 13:53:29.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bb9500 exit status 1 true [0xc001532408 0xc001532420 0xc001532438] [0xc001532408 0xc001532420 0xc001532438] [0xc001532418 0xc001532430] [0xba6c50 0xba6c50] 0xc0026cf080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 [... the identical exec against ss-2 was retried every 10s from 13:53:39 through 13:58:06, each attempt returning rc: 1 and Error from server (NotFound): pods "ss-2" not found ...] Dec 17 13:58:16.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1154 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 17 13:58:16.751: INFO: rc: 1 Dec 17 13:58:16.752: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Dec 17 13:58:16.752: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 17 13:58:16.802: INFO: Deleting all statefulset in ns statefulset-1154 Dec 17 13:58:16.811: INFO: Scaling statefulset ss to 0 Dec 17 13:58:16.829: INFO: Waiting for statefulset status.replicas updated to 0 Dec 17 13:58:16.831: INFO: Deleting statefulset ss
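A note on the mv commands that run all through this test: the pods' readiness probe evidently depends on index.html being served out of /usr/share/nginx/html (the log shows each pod flip to Ready=false after the file is moved away and back to Ready=true once it is restored), and the trailing || true keeps the exec from failing when the file has already been moved. Because the StatefulSet controller halts ordered scaling while any pod is unready, this one toggle drives both the "doesn't scale past N" checks above. A minimal sketch of the same toggle done by hand, assuming a StatefulSet pod ss-0 in the current namespace whose readiness probe is an HTTP GET served by nginx:

# Make the readiness probe fail: ordered scaling now halts at this pod
kubectl exec ss-0 -- /bin/sh -c 'mv /usr/share/nginx/html/index.html /tmp/ || true'

# Watch the Ready condition go False once the probe's failureThreshold is hit
kubectl get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Restore the file: the probe passes again and scaling resumes
kubectl exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/share/nginx/html/ || true'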
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:58:16.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1154" for this suite. Dec 17 13:58:23.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:58:23.130: INFO: namespace statefulset-1154 deletion completed in 6.139723758s • [SLOW TEST:381.391 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:58:23.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Dec 17 13:58:23.286: INFO: Waiting up to 5m0s for pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c" in namespace "var-expansion-3559" to be "success or failure" Dec 17 13:58:23.309: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.271718ms Dec 17 13:58:25.323: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036440006s Dec 17 13:58:27.330: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043766566s Dec 17 13:58:29.338: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051784717s Dec 17 13:58:31.345: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059053771s Dec 17 13:58:33.354: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.067627916s STEP: Saw pod success Dec 17 13:58:33.354: INFO: Pod "var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c" satisfied condition "success or failure" Dec 17 13:58:33.357: INFO: Trying to get logs from node iruya-node pod var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c container dapi-container: STEP: delete the pod Dec 17 13:58:33.448: INFO: Waiting for pod var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c to disappear Dec 17 13:58:33.455: INFO: Pod var-expansion-64293fd1-6229-49fd-b2dc-846a42792d3c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:58:33.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3559" for this suite. Dec 17 13:58:39.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 13:58:39.682: INFO: namespace var-expansion-3559 deletion completed in 6.219222472s • [SLOW TEST:16.551 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 13:58:39.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Dec 17 13:59:01.945: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:01.945: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:02.423: INFO: Exec stderr: "" Dec 17 13:59:02.423: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:02.424: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:02.830: INFO: Exec stderr: "" Dec 17 13:59:02.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:02.831: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:03.202: INFO: Exec stderr: "" Dec 17 13:59:03.202: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2167 
PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:03.202: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:03.516: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Dec 17 13:59:03.516: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:03.516: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:04.008: INFO: Exec stderr: "" Dec 17 13:59:04.008: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:04.008: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:04.336: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Dec 17 13:59:04.336: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:04.336: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:04.897: INFO: Exec stderr: "" Dec 17 13:59:04.897: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:04.897: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:05.137: INFO: Exec stderr: "" Dec 17 13:59:05.137: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:05.137: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:05.418: INFO: Exec stderr: "" Dec 17 13:59:05.418: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2167 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 17 13:59:05.419: INFO: >>> kubeConfig: /root/.kube/config Dec 17 13:59:05.663: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 13:59:05.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2167" for this suite. 
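What the cat comparisons above establish: for the ordinary (hostNetwork=false) pod, kubelet rewrites /etc/hosts, so it differs from the image's copy the test keeps at /etc/hosts-original; the container that mounts its own /etc/hosts, and every container in the hostNetwork=true pod, keep their file untouched. A quick manual version of the same check (a sketch: the pod name hosts-demo is a placeholder, and the header text is what kubelets of this era write, so it may differ in other versions):

kubectl run hosts-demo --image=busybox --restart=Never -- sleep 3600

# A kubelet-managed file announces itself in its first line:
kubectl exec hosts-demo -- head -n 1 /etc/hosts   # "# Kubernetes-managed hosts file."

For a hostNetwork pod the same command shows whatever the node's own /etc/hosts starts with, since kubelet leaves that file alone.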
Dec 17 14:00:07.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 14:00:07.849: INFO: namespace e2e-kubelet-etc-hosts-2167 deletion completed in 1m2.173753568s • [SLOW TEST:88.166 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 14:00:07.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1217 14:00:17.982513 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 17 14:00:17.982: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 14:00:17.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1289" for this suite. 
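The rc/pod dance above is ownership-based garbage collection: pods created by the replication controller carry an ownerReference back to it, so deleting the RC without orphaning lets the garbage collector cascade the delete down to the pods, which is what the "wait for all pods to be garbage collected" step observes (the empty "For ..." sections are the metrics grabber printing headings it collected nothing for, having warned that no master node is registered). A sketch of the two deletion modes with the v1.15-era kubectl flag syntax, using a hypothetical RC named my-rc:

# Each pod records its controller; this prints my-rc once per pod
kubectl get pods -o jsonpath='{.items[*].metadata.ownerReferences[*].name}'

# Default propagation: dependent pods are garbage collected with the RC
kubectl delete rc my-rc

# Orphan mode: the RC is deleted but its pods are left running
kubectl delete rc my-rc --cascade=false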
Dec 17 14:00:24.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 14:00:24.083: INFO: namespace gc-1289 deletion completed in 6.098514704s • [SLOW TEST:16.232 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 14:00:24.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Dec 17 14:00:24.197: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3291" to be "success or failure" Dec 17 14:00:24.219: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.271337ms Dec 17 14:00:26.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029741101s Dec 17 14:00:28.250: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052704802s Dec 17 14:00:30.263: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066402838s Dec 17 14:00:32.270: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073371423s Dec 17 14:00:34.278: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081051022s Dec 17 14:00:36.284: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087395733s STEP: Saw pod success Dec 17 14:00:36.285: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Dec 17 14:00:36.287: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Dec 17 14:00:36.348: INFO: Waiting for pod pod-host-path-test to disappear Dec 17 14:00:36.357: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 14:00:36.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3291" for this suite. 
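The pod-host-path-test pod mounts a hostPath volume and its containers inspect the mode bits on the mount point, which is all this conformance case asserts. A self-contained pod of roughly the same shape (the name hostpath-demo and the /tmp path are placeholders; DirectoryOrCreate simply creates the host directory if it is absent):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the volume's mode bits
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF

kubectl logs hostpath-demo   # the single ls -ld line with the mount's permissions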
Dec 17 14:00:42.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 14:00:42.661: INFO: namespace hostpath-3291 deletion completed in 6.296233696s • [SLOW TEST:18.577 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 14:00:42.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Dec 17 14:00:42.778: INFO: Waiting up to 5m0s for pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc" in namespace "containers-4274" to be "success or failure" Dec 17 14:00:42.786: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.355923ms Dec 17 14:00:44.796: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017850399s Dec 17 14:00:46.804: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025715085s Dec 17 14:00:48.812: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034559711s Dec 17 14:00:50.909: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131531981s Dec 17 14:00:52.917: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139330342s STEP: Saw pod success Dec 17 14:00:52.917: INFO: Pod "client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc" satisfied condition "success or failure" Dec 17 14:00:52.922: INFO: Trying to get logs from node iruya-node pod client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc container test-container: STEP: delete the pod Dec 17 14:00:53.100: INFO: Waiting for pod client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc to disappear Dec 17 14:00:53.111: INFO: Pod client-containers-995e2ad9-5ca2-4fab-bcc2-0d3b65a7dbdc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 14:00:53.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4274" for this suite. 
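The entrypoint test exercises the standard override knobs: in a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD. A self-contained example of the same override (the pod name entrypoint-demo is a placeholder):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]                      # overrides the image ENTRYPOINT
    args: ["hello", "from", "the", "spec"] # overrides the image CMD
EOF

kubectl logs entrypoint-demo   # -> hello from the spec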
Dec 17 14:00:59.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 14:00:59.270: INFO: namespace containers-4274 deletion completed in 6.154493054s • [SLOW TEST:16.609 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 14:00:59.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-1078becc-da99-4e1d-8e22-fca404bbd65a STEP: Creating a pod to test consume secrets Dec 17 14:00:59.493: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1" in namespace "projected-1597" to be "success or failure" Dec 17 14:00:59.501: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.276913ms Dec 17 14:01:01.695: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202048855s Dec 17 14:01:03.734: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241068219s Dec 17 14:01:05.744: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250864267s Dec 17 14:01:07.778: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285076293s Dec 17 14:01:09.790: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.297094666s STEP: Saw pod success Dec 17 14:01:09.791: INFO: Pod "pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1" satisfied condition "success or failure" Dec 17 14:01:09.796: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1 container projected-secret-volume-test: STEP: delete the pod Dec 17 14:01:09.968: INFO: Waiting for pod pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1 to disappear Dec 17 14:01:09.980: INFO: Pod pod-projected-secrets-91e6f0a6-0e02-43fd-8b55-ed4aa0549ec1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 17 14:01:09.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1597" for this suite. Dec 17 14:01:16.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 14:01:16.208: INFO: namespace projected-1597 deletion completed in 6.220397661s • [SLOW TEST:16.937 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 17 14:01:16.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9896 I1217 14:01:16.304625 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9896, replica count: 1 I1217 14:01:17.355499 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:18.356011 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:19.357097 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:20.357718 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:21.358071 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:22.358482 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:23.359053 
8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:24.359689 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:25.360455 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1217 14:01:26.361296 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 17 14:01:26.598: INFO: Created: latency-svc-wnxrp Dec 17 14:01:26.687: INFO: Got endpoints: latency-svc-wnxrp [225.720409ms] Dec 17 14:01:26.802: INFO: Created: latency-svc-mstnz Dec 17 14:01:26.959: INFO: Created: latency-svc-jvkvn Dec 17 14:01:26.959: INFO: Got endpoints: latency-svc-mstnz [270.406644ms] Dec 17 14:01:26.977: INFO: Got endpoints: latency-svc-jvkvn [287.418519ms] Dec 17 14:01:27.044: INFO: Created: latency-svc-kjq57 Dec 17 14:01:27.127: INFO: Got endpoints: latency-svc-kjq57 [437.455641ms] Dec 17 14:01:27.167: INFO: Created: latency-svc-9l7bc Dec 17 14:01:27.182: INFO: Got endpoints: latency-svc-9l7bc [493.149067ms] Dec 17 14:01:27.215: INFO: Created: latency-svc-m85mf Dec 17 14:01:27.377: INFO: Got endpoints: latency-svc-m85mf [688.075923ms] Dec 17 14:01:27.385: INFO: Created: latency-svc-9llsr Dec 17 14:01:27.394: INFO: Got endpoints: latency-svc-9llsr [704.90814ms] Dec 17 14:01:27.440: INFO: Created: latency-svc-mbxbc Dec 17 14:01:27.446: INFO: Got endpoints: latency-svc-mbxbc [756.538835ms] Dec 17 14:01:27.583: INFO: Created: latency-svc-sbl4c Dec 17 14:01:27.589: INFO: Got endpoints: latency-svc-sbl4c [899.862356ms] Dec 17 14:01:27.790: INFO: Created: latency-svc-zw7bz Dec 17 14:01:27.811: INFO: Got endpoints: latency-svc-zw7bz [1.122492475s] Dec 17 14:01:27.860: INFO: Created: latency-svc-x6wk5 Dec 17 14:01:27.965: INFO: Got endpoints: latency-svc-x6wk5 [1.276609983s] Dec 17 14:01:28.001: INFO: Created: latency-svc-rj9k5 Dec 17 14:01:28.013: INFO: Got endpoints: latency-svc-rj9k5 [1.323878941s] Dec 17 14:01:28.042: INFO: Created: latency-svc-hfrxj Dec 17 14:01:28.052: INFO: Got endpoints: latency-svc-hfrxj [1.362698162s] Dec 17 14:01:28.148: INFO: Created: latency-svc-nrcrx Dec 17 14:01:28.154: INFO: Got endpoints: latency-svc-nrcrx [1.465172327s] Dec 17 14:01:28.191: INFO: Created: latency-svc-lgm6b Dec 17 14:01:28.203: INFO: Got endpoints: latency-svc-lgm6b [1.514029737s] Dec 17 14:01:28.339: INFO: Created: latency-svc-v8dt9 Dec 17 14:01:28.340: INFO: Got endpoints: latency-svc-v8dt9 [1.651121959s] Dec 17 14:01:28.397: INFO: Created: latency-svc-ls9jt Dec 17 14:01:28.405: INFO: Got endpoints: latency-svc-ls9jt [1.445005806s] Dec 17 14:01:28.507: INFO: Created: latency-svc-d48fh Dec 17 14:01:28.541: INFO: Got endpoints: latency-svc-d48fh [1.564047569s] Dec 17 14:01:28.549: INFO: Created: latency-svc-2qgkn Dec 17 14:01:28.557: INFO: Got endpoints: latency-svc-2qgkn [1.429419776s] Dec 17 14:01:28.599: INFO: Created: latency-svc-82n4f Dec 17 14:01:28.600: INFO: Got endpoints: latency-svc-82n4f [1.416779819s] Dec 17 14:01:28.717: INFO: Created: latency-svc-fz969 Dec 17 14:01:28.728: INFO: Got endpoints: latency-svc-fz969 [1.350312216s] Dec 17 14:01:28.764: INFO: Created: latency-svc-dwwl8 Dec 17 14:01:28.770: INFO: Got endpoints: latency-svc-dwwl8 [1.376153818s] Dec 17 14:01:28.943: INFO: Created: 
latency-svc-hcwqm Dec 17 14:01:28.983: INFO: Created: latency-svc-mbjzs Dec 17 14:01:28.985: INFO: Got endpoints: latency-svc-hcwqm [1.539062438s] Dec 17 14:01:28.993: INFO: Got endpoints: latency-svc-mbjzs [1.403887253s] Dec 17 14:01:29.233: INFO: Created: latency-svc-2nzkv Dec 17 14:01:29.271: INFO: Got endpoints: latency-svc-2nzkv [1.459471973s] Dec 17 14:01:29.490: INFO: Created: latency-svc-5t54t Dec 17 14:01:29.495: INFO: Got endpoints: latency-svc-5t54t [1.53027737s] Dec 17 14:01:29.687: INFO: Created: latency-svc-tssk5 Dec 17 14:01:29.687: INFO: Got endpoints: latency-svc-tssk5 [1.673712304s] Dec 17 14:01:29.775: INFO: Created: latency-svc-ggrtw Dec 17 14:01:29.776: INFO: Got endpoints: latency-svc-ggrtw [1.723496714s] Dec 17 14:01:29.941: INFO: Created: latency-svc-gclxs Dec 17 14:01:30.135: INFO: Got endpoints: latency-svc-gclxs [1.980645506s] Dec 17 14:01:30.145: INFO: Created: latency-svc-b5jcm Dec 17 14:01:30.151: INFO: Got endpoints: latency-svc-b5jcm [1.947450437s] Dec 17 14:01:30.222: INFO: Created: latency-svc-tcvx5 Dec 17 14:01:30.245: INFO: Got endpoints: latency-svc-tcvx5 [1.904546774s] Dec 17 14:01:30.405: INFO: Created: latency-svc-shcw2 Dec 17 14:01:30.419: INFO: Got endpoints: latency-svc-shcw2 [2.013743629s] Dec 17 14:01:30.554: INFO: Created: latency-svc-nphj5 Dec 17 14:01:30.597: INFO: Got endpoints: latency-svc-nphj5 [2.05486677s] Dec 17 14:01:30.606: INFO: Created: latency-svc-gqktd Dec 17 14:01:30.618: INFO: Got endpoints: latency-svc-gqktd [2.060762591s] Dec 17 14:01:30.764: INFO: Created: latency-svc-qc4h5 Dec 17 14:01:30.936: INFO: Got endpoints: latency-svc-qc4h5 [2.335812364s] Dec 17 14:01:31.175: INFO: Created: latency-svc-2s2hl Dec 17 14:01:31.209: INFO: Got endpoints: latency-svc-2s2hl [2.47981406s] Dec 17 14:01:31.350: INFO: Created: latency-svc-wkzjp Dec 17 14:01:31.379: INFO: Got endpoints: latency-svc-wkzjp [2.608557083s] Dec 17 14:01:31.544: INFO: Created: latency-svc-qwhnb Dec 17 14:01:31.566: INFO: Got endpoints: latency-svc-qwhnb [2.581083453s] Dec 17 14:01:31.637: INFO: Created: latency-svc-cxwtz Dec 17 14:01:31.638: INFO: Got endpoints: latency-svc-cxwtz [2.644278987s] Dec 17 14:01:31.826: INFO: Created: latency-svc-tjvn4 Dec 17 14:01:31.827: INFO: Got endpoints: latency-svc-tjvn4 [2.555390067s] Dec 17 14:01:31.929: INFO: Created: latency-svc-4dvqx Dec 17 14:01:31.949: INFO: Got endpoints: latency-svc-4dvqx [2.453797445s] Dec 17 14:01:32.026: INFO: Created: latency-svc-8ltpc Dec 17 14:01:32.095: INFO: Got endpoints: latency-svc-8ltpc [2.408083526s] Dec 17 14:01:32.135: INFO: Created: latency-svc-72ktx Dec 17 14:01:32.238: INFO: Got endpoints: latency-svc-72ktx [2.462162736s] Dec 17 14:01:32.243: INFO: Created: latency-svc-t8vt9 Dec 17 14:01:32.267: INFO: Got endpoints: latency-svc-t8vt9 [2.131468487s] Dec 17 14:01:32.432: INFO: Created: latency-svc-hn57k Dec 17 14:01:32.443: INFO: Got endpoints: latency-svc-hn57k [2.291657s] Dec 17 14:01:32.492: INFO: Created: latency-svc-9qhlk Dec 17 14:01:32.595: INFO: Got endpoints: latency-svc-9qhlk [2.349966206s] Dec 17 14:01:32.648: INFO: Created: latency-svc-gxrmq Dec 17 14:01:32.658: INFO: Got endpoints: latency-svc-gxrmq [2.238330405s] Dec 17 14:01:32.798: INFO: Created: latency-svc-5c8qq Dec 17 14:01:32.820: INFO: Got endpoints: latency-svc-5c8qq [2.222985068s] Dec 17 14:01:32.883: INFO: Created: latency-svc-bns7d Dec 17 14:01:32.968: INFO: Got endpoints: latency-svc-bns7d [2.350100305s] Dec 17 14:01:33.002: INFO: Created: latency-svc-54wxc Dec 17 14:01:33.013: INFO: Got endpoints: latency-svc-54wxc 
[2.076358724s] Dec 17 14:01:33.182: INFO: Created: latency-svc-lbc4j Dec 17 14:01:33.183: INFO: Got endpoints: latency-svc-lbc4j [214.175369ms] Dec 17 14:01:33.258: INFO: Created: latency-svc-dnlm2 Dec 17 14:01:33.321: INFO: Got endpoints: latency-svc-dnlm2 [2.11126815s] Dec 17 14:01:33.362: INFO: Created: latency-svc-wh557 Dec 17 14:01:33.366: INFO: Got endpoints: latency-svc-wh557 [1.98576422s] Dec 17 14:01:33.498: INFO: Created: latency-svc-8p5qv Dec 17 14:01:33.525: INFO: Got endpoints: latency-svc-8p5qv [1.958231285s] Dec 17 14:01:33.576: INFO: Created: latency-svc-hzjk4 Dec 17 14:01:33.594: INFO: Got endpoints: latency-svc-hzjk4 [1.955331539s] Dec 17 14:01:33.702: INFO: Created: latency-svc-lckr6 Dec 17 14:01:33.702: INFO: Got endpoints: latency-svc-lckr6 [1.875574164s] Dec 17 14:01:33.754: INFO: Created: latency-svc-b8tmn Dec 17 14:01:33.759: INFO: Got endpoints: latency-svc-b8tmn [1.808788193s] Dec 17 14:01:33.906: INFO: Created: latency-svc-85jxz Dec 17 14:01:33.928: INFO: Got endpoints: latency-svc-85jxz [1.832604519s] Dec 17 14:01:33.987: INFO: Created: latency-svc-6qx29 Dec 17 14:01:34.070: INFO: Got endpoints: latency-svc-6qx29 [1.83100819s] Dec 17 14:01:34.071: INFO: Created: latency-svc-8zqdz Dec 17 14:01:34.111: INFO: Got endpoints: latency-svc-8zqdz [1.843884206s] Dec 17 14:01:34.227: INFO: Created: latency-svc-fp6zx Dec 17 14:01:34.235: INFO: Got endpoints: latency-svc-fp6zx [1.791790969s] Dec 17 14:01:34.268: INFO: Created: latency-svc-6jhhp Dec 17 14:01:34.392: INFO: Created: latency-svc-9hpt6 Dec 17 14:01:34.392: INFO: Got endpoints: latency-svc-6jhhp [1.796255224s] Dec 17 14:01:34.411: INFO: Got endpoints: latency-svc-9hpt6 [1.753029581s] Dec 17 14:01:34.617: INFO: Created: latency-svc-bz26v Dec 17 14:01:34.668: INFO: Created: latency-svc-pr2np Dec 17 14:01:34.668: INFO: Got endpoints: latency-svc-bz26v [1.847851831s] Dec 17 14:01:34.697: INFO: Got endpoints: latency-svc-pr2np [1.68402769s] Dec 17 14:01:34.859: INFO: Created: latency-svc-528g7 Dec 17 14:01:34.876: INFO: Got endpoints: latency-svc-528g7 [1.692602805s] Dec 17 14:01:34.979: INFO: Created: latency-svc-vt6rl Dec 17 14:01:35.004: INFO: Got endpoints: latency-svc-vt6rl [1.682716951s] Dec 17 14:01:35.128: INFO: Created: latency-svc-v4vl8 Dec 17 14:01:35.141: INFO: Got endpoints: latency-svc-v4vl8 [1.775556066s] Dec 17 14:01:35.206: INFO: Created: latency-svc-q527n Dec 17 14:01:35.286: INFO: Got endpoints: latency-svc-q527n [1.759529783s] Dec 17 14:01:35.327: INFO: Created: latency-svc-4n68g Dec 17 14:01:35.339: INFO: Got endpoints: latency-svc-4n68g [1.74543415s] Dec 17 14:01:35.373: INFO: Created: latency-svc-f4j2d Dec 17 14:01:35.373: INFO: Got endpoints: latency-svc-f4j2d [1.670507541s] Dec 17 14:01:35.564: INFO: Created: latency-svc-vlkzm Dec 17 14:01:35.569: INFO: Got endpoints: latency-svc-vlkzm [1.810021932s] Dec 17 14:01:35.619: INFO: Created: latency-svc-k4977 Dec 17 14:01:35.632: INFO: Got endpoints: latency-svc-k4977 [1.703302284s] Dec 17 14:01:35.791: INFO: Created: latency-svc-cbdqd Dec 17 14:01:35.892: INFO: Got endpoints: latency-svc-cbdqd [1.821343618s] Dec 17 14:01:35.932: INFO: Created: latency-svc-vbs4q Dec 17 14:01:35.938: INFO: Got endpoints: latency-svc-vbs4q [1.826166757s] Dec 17 14:01:36.068: INFO: Created: latency-svc-fsptm Dec 17 14:01:36.075: INFO: Got endpoints: latency-svc-fsptm [1.839951247s] Dec 17 14:01:36.115: INFO: Created: latency-svc-skl76 Dec 17 14:01:36.129: INFO: Got endpoints: latency-svc-skl76 [1.736757722s] Dec 17 14:01:36.249: INFO: Created: latency-svc-gs255 Dec 17 
14:01:36.253: INFO: Got endpoints: latency-svc-gs255 [1.841746557s] Dec 17 14:01:36.295: INFO: Created: latency-svc-52w8c Dec 17 14:01:36.302: INFO: Got endpoints: latency-svc-52w8c [1.633029903s] Dec 17 14:01:36.348: INFO: Created: latency-svc-jsc2q Dec 17 14:01:36.450: INFO: Got endpoints: latency-svc-jsc2q [1.752873426s] Dec 17 14:01:36.450: INFO: Created: latency-svc-zbglw Dec 17 14:01:36.501: INFO: Got endpoints: latency-svc-zbglw [1.624843085s] Dec 17 14:01:36.507: INFO: Created: latency-svc-5plxj Dec 17 14:01:36.660: INFO: Got endpoints: latency-svc-5plxj [1.655419902s] Dec 17 14:01:36.713: INFO: Created: latency-svc-rhp5l Dec 17 14:01:36.914: INFO: Got endpoints: latency-svc-rhp5l [1.772028228s] Dec 17 14:01:36.928: INFO: Created: latency-svc-gjn75 Dec 17 14:01:36.928: INFO: Got endpoints: latency-svc-gjn75 [1.642361078s] Dec 17 14:01:36.982: INFO: Created: latency-svc-ztmtq Dec 17 14:01:36.991: INFO: Got endpoints: latency-svc-ztmtq [1.651474488s] Dec 17 14:01:37.106: INFO: Created: latency-svc-7qk2z Dec 17 14:01:37.111: INFO: Got endpoints: latency-svc-7qk2z [1.738467996s] Dec 17 14:01:37.181: INFO: Created: latency-svc-7p8kp Dec 17 14:01:37.185: INFO: Got endpoints: latency-svc-7p8kp [1.61626965s] Dec 17 14:01:37.322: INFO: Created: latency-svc-d5ggm Dec 17 14:01:37.322: INFO: Got endpoints: latency-svc-d5ggm [1.690328203s] Dec 17 14:01:37.379: INFO: Created: latency-svc-lr6kk Dec 17 14:01:37.471: INFO: Got endpoints: latency-svc-lr6kk [1.578600584s] Dec 17 14:01:37.476: INFO: Created: latency-svc-j6k6k Dec 17 14:01:37.520: INFO: Got endpoints: latency-svc-j6k6k [1.581623945s] Dec 17 14:01:37.563: INFO: Created: latency-svc-bjwr4 Dec 17 14:01:37.671: INFO: Got endpoints: latency-svc-bjwr4 [1.595657815s] Dec 17 14:01:37.672: INFO: Created: latency-svc-hrzj5 Dec 17 14:01:37.695: INFO: Got endpoints: latency-svc-hrzj5 [1.564806655s] Dec 17 14:01:37.754: INFO: Created: latency-svc-dqlw5 Dec 17 14:01:37.759: INFO: Got endpoints: latency-svc-dqlw5 [1.505799917s] Dec 17 14:01:37.956: INFO: Created: latency-svc-zbx79 Dec 17 14:01:37.958: INFO: Got endpoints: latency-svc-zbx79 [1.656363581s] Dec 17 14:01:38.161: INFO: Created: latency-svc-jvw99 Dec 17 14:01:38.186: INFO: Got endpoints: latency-svc-jvw99 [1.73585062s] Dec 17 14:01:38.337: INFO: Created: latency-svc-pcw9r Dec 17 14:01:38.338: INFO: Got endpoints: latency-svc-pcw9r [1.83664085s] Dec 17 14:01:38.407: INFO: Created: latency-svc-4knpk Dec 17 14:01:38.428: INFO: Got endpoints: latency-svc-4knpk [1.767560325s] Dec 17 14:01:38.565: INFO: Created: latency-svc-qx8fp Dec 17 14:01:38.590: INFO: Got endpoints: latency-svc-qx8fp [1.675424538s] Dec 17 14:01:38.713: INFO: Created: latency-svc-9scdk Dec 17 14:01:38.724: INFO: Got endpoints: latency-svc-9scdk [1.795925307s] Dec 17 14:01:38.766: INFO: Created: latency-svc-22v4p Dec 17 14:01:38.770: INFO: Got endpoints: latency-svc-22v4p [1.778610002s] Dec 17 14:01:38.832: INFO: Created: latency-svc-gp27k Dec 17 14:01:38.972: INFO: Got endpoints: latency-svc-gp27k [1.860887249s] Dec 17 14:01:38.983: INFO: Created: latency-svc-l9f2r Dec 17 14:01:38.987: INFO: Got endpoints: latency-svc-l9f2r [1.801186173s] Dec 17 14:01:39.049: INFO: Created: latency-svc-mplw9 Dec 17 14:01:39.065: INFO: Got endpoints: latency-svc-mplw9 [1.74269377s] Dec 17 14:01:39.169: INFO: Created: latency-svc-j9hx8 Dec 17 14:01:39.172: INFO: Got endpoints: latency-svc-j9hx8 [1.701186102s] Dec 17 14:01:39.210: INFO: Created: latency-svc-przsn Dec 17 14:01:39.220: INFO: Got endpoints: latency-svc-przsn [1.699587136s] Dec 17 
14:01:39.352: INFO: Created: latency-svc-s2fbm Dec 17 14:01:39.403: INFO: Got endpoints: latency-svc-s2fbm [1.731159688s] Dec 17 14:01:39.444: INFO: Created: latency-svc-5tpjw Dec 17 14:01:39.562: INFO: Got endpoints: latency-svc-5tpjw [1.866246861s] Dec 17 14:01:39.568: INFO: Created: latency-svc-sf2c5 Dec 17 14:01:39.575: INFO: Got endpoints: latency-svc-sf2c5 [1.815524841s] Dec 17 14:01:39.616: INFO: Created: latency-svc-z94hq Dec 17 14:01:39.617: INFO: Got endpoints: latency-svc-z94hq [1.658704318s] Dec 17 14:01:39.647: INFO: Created: latency-svc-rhxq4 Dec 17 14:01:39.654: INFO: Got endpoints: latency-svc-rhxq4 [1.467301874s] Dec 17 14:01:39.809: INFO: Created: latency-svc-5drhp Dec 17 14:01:39.874: INFO: Created: latency-svc-497b6 Dec 17 14:01:39.880: INFO: Got endpoints: latency-svc-5drhp [1.54213408s] Dec 17 14:01:39.994: INFO: Got endpoints: latency-svc-497b6 [1.56567381s] Dec 17 14:01:40.006: INFO: Created: latency-svc-6l5rb Dec 17 14:01:40.013: INFO: Got endpoints: latency-svc-6l5rb [1.422768067s] Dec 17 14:01:40.051: INFO: Created: latency-svc-cfm2m Dec 17 14:01:40.077: INFO: Got endpoints: latency-svc-cfm2m [1.353047341s] Dec 17 14:01:40.085: INFO: Created: latency-svc-drrnp Dec 17 14:01:40.221: INFO: Got endpoints: latency-svc-drrnp [1.451476193s] Dec 17 14:01:40.235: INFO: Created: latency-svc-c89gw Dec 17 14:01:40.239: INFO: Got endpoints: latency-svc-c89gw [1.266310977s] Dec 17 14:01:40.285: INFO: Created: latency-svc-d5l57 Dec 17 14:01:40.289: INFO: Got endpoints: latency-svc-d5l57 [1.30220382s] Dec 17 14:01:40.318: INFO: Created: latency-svc-89mbf Dec 17 14:01:40.478: INFO: Got endpoints: latency-svc-89mbf [1.412864542s] Dec 17 14:01:40.487: INFO: Created: latency-svc-bjgl6 Dec 17 14:01:40.502: INFO: Got endpoints: latency-svc-bjgl6 [1.329624617s] Dec 17 14:01:40.531: INFO: Created: latency-svc-6476s Dec 17 14:01:40.560: INFO: Got endpoints: latency-svc-6476s [1.340604631s] Dec 17 14:01:40.703: INFO: Created: latency-svc-h86md Dec 17 14:01:40.718: INFO: Got endpoints: latency-svc-h86md [1.314769537s] Dec 17 14:01:40.772: INFO: Created: latency-svc-m7fk4 Dec 17 14:01:40.784: INFO: Got endpoints: latency-svc-m7fk4 [1.221737494s] Dec 17 14:01:40.951: INFO: Created: latency-svc-lxsfx Dec 17 14:01:40.960: INFO: Got endpoints: latency-svc-lxsfx [1.385289499s] Dec 17 14:01:41.013: INFO: Created: latency-svc-rkprs Dec 17 14:01:41.014: INFO: Got endpoints: latency-svc-rkprs [1.397272569s] Dec 17 14:01:41.236: INFO: Created: latency-svc-9z2vc Dec 17 14:01:41.243: INFO: Got endpoints: latency-svc-9z2vc [1.589137023s] Dec 17 14:01:41.297: INFO: Created: latency-svc-q48x6 Dec 17 14:01:41.390: INFO: Got endpoints: latency-svc-q48x6 [1.510005144s] Dec 17 14:01:41.415: INFO: Created: latency-svc-hs2w6 Dec 17 14:01:41.416: INFO: Got endpoints: latency-svc-hs2w6 [1.42175625s] Dec 17 14:01:41.465: INFO: Created: latency-svc-qtb4r Dec 17 14:01:41.481: INFO: Got endpoints: latency-svc-qtb4r [1.466987514s] Dec 17 14:01:41.575: INFO: Created: latency-svc-g4md2 Dec 17 14:01:41.582: INFO: Got endpoints: latency-svc-g4md2 [1.50447755s] Dec 17 14:01:41.632: INFO: Created: latency-svc-j7fg5 Dec 17 14:01:41.632: INFO: Got endpoints: latency-svc-j7fg5 [1.409875002s] Dec 17 14:01:41.773: INFO: Created: latency-svc-z8mrq Dec 17 14:01:41.796: INFO: Got endpoints: latency-svc-z8mrq [1.557071238s] Dec 17 14:01:41.858: INFO: Created: latency-svc-9fnz9 Dec 17 14:01:41.869: INFO: Got endpoints: latency-svc-9fnz9 [1.580046206s] Dec 17 14:01:42.052: INFO: Created: latency-svc-qs2rz Dec 17 14:01:42.074: INFO: 
Got endpoints: latency-svc-qs2rz [1.595127053s] Dec 17 14:01:42.099: INFO: Created: latency-svc-x5vxb Dec 17 14:01:42.107: INFO: Got endpoints: latency-svc-x5vxb [1.604154156s] Dec 17 14:01:42.237: INFO: Created: latency-svc-4bktt Dec 17 14:01:42.256: INFO: Got endpoints: latency-svc-4bktt [1.694679868s] Dec 17 14:01:42.291: INFO: Created: latency-svc-bn6bc Dec 17 14:01:42.314: INFO: Got endpoints: latency-svc-bn6bc [1.595677045s] Dec 17 14:01:42.444: INFO: Created: latency-svc-fljtw Dec 17 14:01:42.461: INFO: Got endpoints: latency-svc-fljtw [1.676559328s] Dec 17 14:01:42.517: INFO: Created: latency-svc-p97ql Dec 17 14:01:42.539: INFO: Got endpoints: latency-svc-p97ql [1.578322514s] Dec 17 14:01:42.651: INFO: Created: latency-svc-qskz9 Dec 17 14:01:42.664: INFO: Got endpoints: latency-svc-qskz9 [1.649625782s] Dec 17 14:01:42.702: INFO: Created: latency-svc-wc595 Dec 17 14:01:42.745: INFO: Got endpoints: latency-svc-wc595 [1.501609585s] Dec 17 14:01:42.773: INFO: Created: latency-svc-hkpch Dec 17 14:01:42.903: INFO: Got endpoints: latency-svc-hkpch [1.511840822s] Dec 17 14:01:42.913: INFO: Created: latency-svc-v9465 Dec 17 14:01:42.921: INFO: Got endpoints: latency-svc-v9465 [1.504679447s] Dec 17 14:01:43.002: INFO: Created: latency-svc-7vzbg Dec 17 14:01:43.190: INFO: Got endpoints: latency-svc-7vzbg [1.708940555s] Dec 17 14:01:43.197: INFO: Created: latency-svc-4trxq Dec 17 14:01:43.215: INFO: Got endpoints: latency-svc-4trxq [1.632440546s] Dec 17 14:01:43.296: INFO: Created: latency-svc-ngrkl Dec 17 14:01:43.408: INFO: Got endpoints: latency-svc-ngrkl [1.776409625s] Dec 17 14:01:43.428: INFO: Created: latency-svc-b2g7h Dec 17 14:01:43.437: INFO: Got endpoints: latency-svc-b2g7h [1.640388589s] Dec 17 14:01:43.478: INFO: Created: latency-svc-4q84s Dec 17 14:01:43.481: INFO: Got endpoints: latency-svc-4q84s [1.610952975s] Dec 17 14:01:43.610: INFO: Created: latency-svc-z6ft4 Dec 17 14:01:43.624: INFO: Got endpoints: latency-svc-z6ft4 [1.549992897s] Dec 17 14:01:43.666: INFO: Created: latency-svc-8jx7w Dec 17 14:01:43.673: INFO: Got endpoints: latency-svc-8jx7w [1.565581089s] Dec 17 14:01:43.791: INFO: Created: latency-svc-p2bwk Dec 17 14:01:43.815: INFO: Got endpoints: latency-svc-p2bwk [1.558658463s] Dec 17 14:01:43.887: INFO: Created: latency-svc-96pgz Dec 17 14:01:43.970: INFO: Got endpoints: latency-svc-96pgz [1.655015516s] Dec 17 14:01:44.001: INFO: Created: latency-svc-fglxc Dec 17 14:01:44.063: INFO: Got endpoints: latency-svc-fglxc [1.602166377s] Dec 17 14:01:44.067: INFO: Created: latency-svc-5zm9r Dec 17 14:01:44.141: INFO: Got endpoints: latency-svc-5zm9r [1.601873814s] Dec 17 14:01:44.179: INFO: Created: latency-svc-kzvhx Dec 17 14:01:44.191: INFO: Got endpoints: latency-svc-kzvhx [1.526231875s] Dec 17 14:01:44.218: INFO: Created: latency-svc-82ddp Dec 17 14:01:44.236: INFO: Got endpoints: latency-svc-82ddp [1.490071892s] Dec 17 14:01:44.338: INFO: Created: latency-svc-5zwzr Dec 17 14:01:44.373: INFO: Created: latency-svc-z4ksd Dec 17 14:01:44.381: INFO: Got endpoints: latency-svc-5zwzr [1.476846058s] Dec 17 14:01:44.389: INFO: Got endpoints: latency-svc-z4ksd [1.467211545s] Dec 17 14:01:44.501: INFO: Created: latency-svc-p4zw5 Dec 17 14:01:44.503: INFO: Got endpoints: latency-svc-p4zw5 [1.312368739s] Dec 17 14:01:44.558: INFO: Created: latency-svc-z4b84 Dec 17 14:01:44.565: INFO: Got endpoints: latency-svc-z4b84 [1.349654542s] Dec 17 14:01:44.661: INFO: Created: latency-svc-tslmn Dec 17 14:01:44.665: INFO: Got endpoints: latency-svc-tslmn [1.255497961s] Dec 17 14:01:44.706: 
INFO: Created: latency-svc-6twhs Dec 17 14:01:44.710: INFO: Got endpoints: latency-svc-6twhs [1.272299834s] Dec 17 14:01:44.766: INFO: Created: latency-svc-wh6xw Dec 17 14:01:44.910: INFO: Got endpoints: latency-svc-wh6xw [1.429451528s] Dec 17 14:01:44.941: INFO: Created: latency-svc-qdbls Dec 17 14:01:44.951: INFO: Got endpoints: latency-svc-qdbls [1.326270465s] Dec 17 14:01:44.997: INFO: Created: latency-svc-8b5zd Dec 17 14:01:45.006: INFO: Got endpoints: latency-svc-8b5zd [1.332417583s] Dec 17 14:01:45.129: INFO: Created: latency-svc-tfcf5 Dec 17 14:01:45.140: INFO: Got endpoints: latency-svc-tfcf5 [1.324490786s] Dec 17 14:01:45.172: INFO: Created: latency-svc-l6fqc Dec 17 14:01:45.174: INFO: Got endpoints: latency-svc-l6fqc [1.203842348s] Dec 17 14:01:45.273: INFO: Created: latency-svc-brwqq Dec 17 14:01:45.312: INFO: Got endpoints: latency-svc-brwqq [1.248077275s] Dec 17 14:01:45.367: INFO: Created: latency-svc-62pnl Dec 17 14:01:45.437: INFO: Got endpoints: latency-svc-62pnl [1.29530887s] Dec 17 14:01:45.461: INFO: Created: latency-svc-l274p Dec 17 14:01:45.486: INFO: Got endpoints: latency-svc-l274p [1.294576064s] Dec 17 14:01:45.614: INFO: Created: latency-svc-k8pdv Dec 17 14:01:45.619: INFO: Got endpoints: latency-svc-k8pdv [1.382266344s] Dec 17 14:01:45.668: INFO: Created: latency-svc-hw5wh Dec 17 14:01:45.672: INFO: Got endpoints: latency-svc-hw5wh [1.290871202s] Dec 17 14:01:45.773: INFO: Created: latency-svc-ft9kf Dec 17 14:01:45.783: INFO: Got endpoints: latency-svc-ft9kf [1.394024117s] Dec 17 14:01:45.940: INFO: Created: latency-svc-t6c7n Dec 17 14:01:45.946: INFO: Got endpoints: latency-svc-t6c7n [1.443580254s] Dec 17 14:01:45.994: INFO: Created: latency-svc-7m46f Dec 17 14:01:46.049: INFO: Got endpoints: latency-svc-7m46f [1.483721077s] Dec 17 14:01:46.233: INFO: Created: latency-svc-qzzs6 Dec 17 14:01:46.267: INFO: Got endpoints: latency-svc-qzzs6 [1.602529154s] Dec 17 14:01:46.385: INFO: Created: latency-svc-4qgkr Dec 17 14:01:46.420: INFO: Got endpoints: latency-svc-4qgkr [1.710487211s] Dec 17 14:01:46.423: INFO: Created: latency-svc-gn65g Dec 17 14:01:46.445: INFO: Got endpoints: latency-svc-gn65g [1.534834468s] Dec 17 14:01:46.470: INFO: Created: latency-svc-dws4p Dec 17 14:01:46.571: INFO: Got endpoints: latency-svc-dws4p [1.619757496s] Dec 17 14:01:46.596: INFO: Created: latency-svc-hmskv Dec 17 14:01:46.623: INFO: Got endpoints: latency-svc-hmskv [1.616779692s] Dec 17 14:01:46.626: INFO: Created: latency-svc-xptbt Dec 17 14:01:46.630: INFO: Got endpoints: latency-svc-xptbt [1.489985154s] Dec 17 14:01:46.662: INFO: Created: latency-svc-2s9z6 Dec 17 14:01:46.783: INFO: Got endpoints: latency-svc-2s9z6 [1.608102571s] Dec 17 14:01:46.808: INFO: Created: latency-svc-97fdn Dec 17 14:01:46.874: INFO: Got endpoints: latency-svc-97fdn [1.561004844s] Dec 17 14:01:46.879: INFO: Created: latency-svc-4nmls Dec 17 14:01:46.979: INFO: Got endpoints: latency-svc-4nmls [1.540964792s] Dec 17 14:01:47.008: INFO: Created: latency-svc-85dpg Dec 17 14:01:47.013: INFO: Got endpoints: latency-svc-85dpg [1.526083522s] Dec 17 14:01:47.054: INFO: Created: latency-svc-4tqh9 Dec 17 14:01:47.061: INFO: Got endpoints: latency-svc-4tqh9 [1.441568662s] Dec 17 14:01:47.216: INFO: Created: latency-svc-7vrkn Dec 17 14:01:47.230: INFO: Got endpoints: latency-svc-7vrkn [1.558207823s] Dec 17 14:01:47.279: INFO: Created: latency-svc-cs97b Dec 17 14:01:47.299: INFO: Got endpoints: latency-svc-cs97b [1.515139505s] Dec 17 14:01:47.380: INFO: Created: latency-svc-xbg4g Dec 17 14:01:47.386: INFO: Got 
endpoints: latency-svc-xbg4g [1.439580726s] Dec 17 14:01:47.433: INFO: Created: latency-svc-g7zzh Dec 17 14:01:47.460: INFO: Got endpoints: latency-svc-g7zzh [1.411019859s] Dec 17 14:01:47.468: INFO: Created: latency-svc-zv8cn Dec 17 14:01:47.469: INFO: Got endpoints: latency-svc-zv8cn [1.201696581s] Dec 17 14:01:47.579: INFO: Created: latency-svc-8xcq7 Dec 17 14:01:47.582: INFO: Got endpoints: latency-svc-8xcq7 [1.161590002s] Dec 17 14:01:47.649: INFO: Created: latency-svc-hpk2j Dec 17 14:01:47.744: INFO: Got endpoints: latency-svc-hpk2j [1.298065308s] Dec 17 14:01:47.783: INFO: Created: latency-svc-djnn4 Dec 17 14:01:47.797: INFO: Got endpoints: latency-svc-djnn4 [1.225907648s] Dec 17 14:01:47.835: INFO: Created: latency-svc-g645c Dec 17 14:01:47.950: INFO: Got endpoints: latency-svc-g645c [1.327302613s] Dec 17 14:01:47.954: INFO: Created: latency-svc-6zlb7 Dec 17 14:01:47.970: INFO: Got endpoints: latency-svc-6zlb7 [1.340128592s] Dec 17 14:01:47.994: INFO: Created: latency-svc-mk8kl Dec 17 14:01:48.004: INFO: Got endpoints: latency-svc-mk8kl [1.221310155s] Dec 17 14:01:48.030: INFO: Created: latency-svc-zl6cb Dec 17 14:01:48.169: INFO: Got endpoints: latency-svc-zl6cb [1.294974948s] Dec 17 14:01:48.215: INFO: Created: latency-svc-hgsfw Dec 17 14:01:48.217: INFO: Got endpoints: latency-svc-hgsfw [1.237897331s] Dec 17 14:01:48.382: INFO: Created: latency-svc-k7wsh Dec 17 14:01:48.387: INFO: Created: latency-svc-tzxwv Dec 17 14:01:48.403: INFO: Got endpoints: latency-svc-tzxwv [1.342113078s] Dec 17 14:01:48.403: INFO: Got endpoints: latency-svc-k7wsh [1.390001911s] Dec 17 14:01:48.470: INFO: Created: latency-svc-9vfpj Dec 17 14:01:48.561: INFO: Got endpoints: latency-svc-9vfpj [1.33084462s] Dec 17 14:01:48.562: INFO: Latencies: [214.175369ms 270.406644ms 287.418519ms 437.455641ms 493.149067ms 688.075923ms 704.90814ms 756.538835ms 899.862356ms 1.122492475s 1.161590002s 1.201696581s 1.203842348s 1.221310155s 1.221737494s 1.225907648s 1.237897331s 1.248077275s 1.255497961s 1.266310977s 1.272299834s 1.276609983s 1.290871202s 1.294576064s 1.294974948s 1.29530887s 1.298065308s 1.30220382s 1.312368739s 1.314769537s 1.323878941s 1.324490786s 1.326270465s 1.327302613s 1.329624617s 1.33084462s 1.332417583s 1.340128592s 1.340604631s 1.342113078s 1.349654542s 1.350312216s 1.353047341s 1.362698162s 1.376153818s 1.382266344s 1.385289499s 1.390001911s 1.394024117s 1.397272569s 1.403887253s 1.409875002s 1.411019859s 1.412864542s 1.416779819s 1.42175625s 1.422768067s 1.429419776s 1.429451528s 1.439580726s 1.441568662s 1.443580254s 1.445005806s 1.451476193s 1.459471973s 1.465172327s 1.466987514s 1.467211545s 1.467301874s 1.476846058s 1.483721077s 1.489985154s 1.490071892s 1.501609585s 1.50447755s 1.504679447s 1.505799917s 1.510005144s 1.511840822s 1.514029737s 1.515139505s 1.526083522s 1.526231875s 1.53027737s 1.534834468s 1.539062438s 1.540964792s 1.54213408s 1.549992897s 1.557071238s 1.558207823s 1.558658463s 1.561004844s 1.564047569s 1.564806655s 1.565581089s 1.56567381s 1.578322514s 1.578600584s 1.580046206s 1.581623945s 1.589137023s 1.595127053s 1.595657815s 1.595677045s 1.601873814s 1.602166377s 1.602529154s 1.604154156s 1.608102571s 1.610952975s 1.61626965s 1.616779692s 1.619757496s 1.624843085s 1.632440546s 1.633029903s 1.640388589s 1.642361078s 1.649625782s 1.651121959s 1.651474488s 1.655015516s 1.655419902s 1.656363581s 1.658704318s 1.670507541s 1.673712304s 1.675424538s 1.676559328s 1.682716951s 1.68402769s 1.690328203s 1.692602805s 1.694679868s 1.699587136s 1.701186102s 1.703302284s 1.708940555s 
1.710487211s 1.723496714s 1.731159688s 1.73585062s 1.736757722s 1.738467996s 1.74269377s 1.74543415s 1.752873426s 1.753029581s 1.759529783s 1.767560325s 1.772028228s 1.775556066s 1.776409625s 1.778610002s 1.791790969s 1.795925307s 1.796255224s 1.801186173s 1.808788193s 1.810021932s 1.815524841s 1.821343618s 1.826166757s 1.83100819s 1.832604519s 1.83664085s 1.839951247s 1.841746557s 1.843884206s 1.847851831s 1.860887249s 1.866246861s 1.875574164s 1.904546774s 1.947450437s 1.955331539s 1.958231285s 1.980645506s 1.98576422s 2.013743629s 2.05486677s 2.060762591s 2.076358724s 2.11126815s 2.131468487s 2.222985068s 2.238330405s 2.291657s 2.335812364s 2.349966206s 2.350100305s 2.408083526s 2.453797445s 2.462162736s 2.47981406s 2.555390067s 2.581083453s 2.608557083s 2.644278987s]
Dec 17 14:01:48.563: INFO: 50 %ile: 1.581623945s
Dec 17 14:01:48.563: INFO: 90 %ile: 2.013743629s
Dec 17 14:01:48.563: INFO: 99 %ile: 2.608557083s
Dec 17 14:01:48.563: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:01:48.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9896" for this suite.
Dec 17 14:02:28.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:02:28.746: INFO: namespace svc-latency-9896 deletion completed in 40.171001417s

• [SLOW TEST:72.538 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:02:28.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 17 14:02:47.199: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 14:02:47.206: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 14:02:49.207: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 14:02:49.213: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 14:02:51.207: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 14:02:51.215: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 14:02:53.207: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 14:02:53.219: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:02:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6970" for this suite.
Dec 17 14:03:15.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:03:15.385: INFO: namespace container-lifecycle-hook-6970 deletion completed in 22.15687791s

• [SLOW TEST:46.637 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:03:15.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 14:03:15.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4526'
Dec 17 14:03:17.353: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 14:03:17.353: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 17 14:03:17.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4526'
Dec 17 14:03:17.640: INFO: stderr: ""
Dec 17 14:03:17.640: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:03:17.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4526" for this suite.
Dec 17 14:03:39.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:03:39.871: INFO: namespace kubectl-4526 deletion completed in 22.223470353s

• [SLOW TEST:24.486 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:03:39.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-8c95a1d4-2e71-4fa9-9a10-9065a91fd240
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:03:39.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-971" for this suite.
Dec 17 14:03:46.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:03:46.100: INFO: namespace secrets-971 deletion completed in 6.151914534s

• [SLOW TEST:6.228 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:03:46.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:03:46.258: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 24.935078ms)
Dec 17 14:03:46.276: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.932565ms)
Dec 17 14:03:46.283: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.028943ms)
Dec 17 14:03:46.287: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.444872ms)
Dec 17 14:03:46.294: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.314055ms)
Dec 17 14:03:46.300: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.597276ms)
Dec 17 14:03:46.304: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.524357ms)
Dec 17 14:03:46.308: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.085044ms)
Dec 17 14:03:46.313: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.775875ms)
Dec 17 14:03:46.317: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.014116ms)
Dec 17 14:03:46.320: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.292353ms)
Dec 17 14:03:46.325: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.480293ms)
Dec 17 14:03:46.330: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.903391ms)
Dec 17 14:03:46.334: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.870905ms)
Dec 17 14:03:46.339: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.736618ms)
Dec 17 14:03:46.344: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.502494ms)
Dec 17 14:03:46.349: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.09403ms)
Dec 17 14:03:46.354: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.512068ms)
Dec 17 14:03:46.363: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.544285ms)
Dec 17 14:03:46.368: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.986783ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:03:46.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2442" for this suite.
Dec 17 14:03:52.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:03:52.649: INFO: namespace proxy-2442 deletion completed in 6.276548204s

• [SLOW TEST:6.549 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
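All twenty requests above, iterations (0) through (19), went through the apiserver's node proxy: GET /api/v1/nodes/<name>:<port>/proxy/logs/ forwards to the kubelet's read-only log endpoint on the stated port and relays the response (here HTTP 200 in roughly 3 to 25ms). A minimal client-go sketch of the same call, assuming the kubeconfig path this suite uses; note DoRaw takes a context in recent client-go, while releases contemporary with this log take no argument:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite loads.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // GET /api/v1/nodes/iruya-node:10250/proxy/logs/ -- the node proxy
        // subresource exercised above, with the kubelet port given explicitly.
        body, err := client.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name("iruya-node:10250").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%.80s\n", body) // first bytes of the kubelet's log directory listing
    }

The "alternatives.log" fragments in the responses above are that directory listing, truncated by the framework's logging.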
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:03:52.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:03:52.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb" in namespace "downward-api-9526" to be "success or failure"
Dec 17 14:03:52.717: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410554ms
Dec 17 14:03:54.735: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022304695s
Dec 17 14:03:56.757: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044270683s
Dec 17 14:03:58.767: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054415177s
Dec 17 14:04:00.777: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063814495s
Dec 17 14:04:02.784: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071263087s
STEP: Saw pod success
Dec 17 14:04:02.784: INFO: Pod "downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb" satisfied condition "success or failure"
Dec 17 14:04:02.788: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb container client-container: 
STEP: delete the pod
Dec 17 14:04:03.781: INFO: Waiting for pod downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb to disappear
Dec 17 14:04:03.801: INFO: Pod downwardapi-volume-877a4795-8fdd-4dab-88f3-cfd3ad4647cb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:04:03.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9526" for this suite.
Dec 17 14:04:09.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:04:10.076: INFO: namespace downward-api-9526 deletion completed in 6.257509991s

• [SLOW TEST:17.427 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
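The pod this spec creates mounts a downwardAPI volume whose file is filled from the container's own memory request, and the client-container then prints that file; the "success or failure" condition passes when the pod exits with the expected output. A sketch of that pod shape using the core/v1 types; the image, request size, and file path here are illustrative assumptions (the log does not print them), not the test's literal fixture:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // assumption; the suite uses its own test image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            // The value the downward API file will surface.
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request", // file the container cats
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }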
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:04:10.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:04:10.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1" in namespace "projected-2637" to be "success or failure"
Dec 17 14:04:10.244: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.727208ms
Dec 17 14:04:12.257: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022440538s
Dec 17 14:04:14.548: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31403814s
Dec 17 14:04:16.563: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328684113s
Dec 17 14:04:18.578: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1": Phase="Running", Reason="", readiness=true. Elapsed: 8.344091739s
Dec 17 14:04:20.591: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.357205791s
STEP: Saw pod success
Dec 17 14:04:20.592: INFO: Pod "downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1" satisfied condition "success or failure"
Dec 17 14:04:20.597: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1 container client-container: 
STEP: delete the pod
Dec 17 14:04:20.699: INFO: Waiting for pod downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1 to disappear
Dec 17 14:04:20.742: INFO: Pod downwardapi-volume-3c667a6c-2ebd-41ff-96c2-ca00c4332ab1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:04:20.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2637" for this suite.
Dec 17 14:04:26.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:04:26.865: INFO: namespace projected-2637 deletion completed in 6.114213164s

• [SLOW TEST:16.789 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
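A projected volume wraps the same downward API items as the previous spec, but under spec.volumes[].projected, where DefaultMode sets the permission bits on every projected file; that mode is what this spec verifies. A sketch of just that volume; the 0400 mode is an assumption, since the log never prints the value:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        defaultMode := int32(0400) // assumed mode under test; read-only for the owner
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    // Applied to each file below unless an item overrides it.
                    DefaultMode: &defaultMode,
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name)
    }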
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:04:26.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-135.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-135.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
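Each probe container runs that shell loop once a second for up to 600 iterations, checking every name over both UDP (dig +notcp) and TCP (dig +tcp) and writing an OK marker file per name that resolves. A rough Go equivalent of one round of that check; it only succeeds from inside a cluster pod, where cluster DNS serves these names, and nothing below is printed by this log:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        const name = "kubernetes.default.svc.cluster.local"
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // UDP path (dig +notcp): the default resolver.
        if addrs, err := net.DefaultResolver.LookupHost(ctx, name); err == nil {
            fmt.Println("OK udp", addrs) // the probe writes OK to /results/... instead
        }

        // TCP path (dig +tcp): a pure-Go resolver forced onto TCP.
        tcp := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "tcp", address)
            },
        }
        if addrs, err := tcp.LookupHost(ctx, name); err == nil {
            fmt.Println("OK tcp", addrs)
        }
    }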

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 14:04:41.058: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.065: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.079: INFO: Unable to read wheezy_udp@PodARecord from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.087: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.093: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.101: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.107: INFO: Unable to read jessie_udp@PodARecord from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.115: INFO: Unable to read jessie_tcp@PodARecord from pod dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf: the server could not find the requested resource (get pods dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf)
Dec 17 14:04:41.115: INFO: Lookups using dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 17 14:04:46.179: INFO: DNS probes using dns-135/dns-test-1732b70d-5a19-4feb-8b13-1c2ae45efabf succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:04:46.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-135" for this suite.
Dec 17 14:04:54.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:04:54.487: INFO: namespace dns-135 deletion completed in 8.169414178s

• [SLOW TEST:27.622 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
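Worth noting in the output above: the 14:04:41 pass recorded eight failed lookups, and five seconds later every name resolved, because the framework re-reads the /results/ marker files on an interval until they all report OK or the timeout expires. The same retry shape with apimachinery's wait helper; the interval, timeout, and condition body here are illustrative, and lookupsSucceeded is a hypothetical stand-in for reading the marker files:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // lookupsSucceeded stands in for checking the /results/ files; it is a
    // placeholder so the sketch compiles and runs.
    func lookupsSucceeded() bool { return true }

    func main() {
        // Check immediately, then every 5s for up to 10m until done;
        // returning (false, nil) from the condition means "not yet, poll again".
        err := wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
            return lookupsSucceeded(), nil
        })
        if err != nil {
            fmt.Println("probes never converged:", err)
        }
    }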
SSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:04:54.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:04:54.632: INFO: Creating deployment "test-recreate-deployment"
Dec 17 14:04:54.661: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Dec 17 14:04:54.786: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 17 14:04:56.808: INFO: Waiting deployment "test-recreate-deployment" to complete
Dec 17 14:04:56.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 14:04:58.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 14:05:00.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 14:05:02.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712188294, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 14:05:04.821: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 17 14:05:04.830: INFO: Updating deployment test-recreate-deployment
Dec 17 14:05:04.830: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
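That invariant is what Strategy Type: Recreate guarantees, visible in the dump below (Strategy:DeploymentStrategy{Type:Recreate,...}): the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, so the two revisions never run together. A sketch of the Deployment shape this spec creates, with name, labels, and image taken from the dump below; treat it as illustrative rather than the test's literal fixture:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    var recreate = &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
            // Recreate (vs the default RollingUpdate) is the strategy under test.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }

    func main() { fmt.Println(recreate.Spec.Strategy.Type) }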
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 17 14:05:05.188: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4100,SelfLink:/apis/apps/v1/namespaces/deployment-4100/deployments/test-recreate-deployment,UID:9288a0bc-dc57-4e00-852a-30987276a108,ResourceVersion:17019708,Generation:2,CreationTimestamp:2019-12-17 14:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-17 14:05:05 +0000 UTC 2019-12-17 14:05:05 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-17 14:05:05 +0000 UTC 2019-12-17 14:04:54 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 17 14:05:05.192: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4100,SelfLink:/apis/apps/v1/namespaces/deployment-4100/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c3e3f4d1-227b-4bf9-b027-5fd3ca4537f5,ResourceVersion:17019706,Generation:1,CreationTimestamp:2019-12-17 14:05:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9288a0bc-dc57-4e00-852a-30987276a108 0xc002b80ca7 0xc002b80ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 14:05:05.192: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 17 14:05:05.192: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4100,SelfLink:/apis/apps/v1/namespaces/deployment-4100/replicasets/test-recreate-deployment-6df85df6b9,UID:d1cb2011-13b6-4a9b-b72f-c1e54232a4a7,ResourceVersion:17019696,Generation:2,CreationTimestamp:2019-12-17 14:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9288a0bc-dc57-4e00-852a-30987276a108 0xc002b80d77 0xc002b80d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 17 14:05:05.195: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8qwx7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8qwx7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4100,SelfLink:/api/v1/namespaces/deployment-4100/pods/test-recreate-deployment-5c8c9cc69d-8qwx7,UID:269d2f03-5eb0-4b6a-8746-7cc27d5418ba,ResourceVersion:17019707,Generation:0,CreationTimestamp:2019-12-17 14:05:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c3e3f4d1-227b-4bf9-b027-5fd3ca4537f5 0xc002b81667 0xc002b81668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b4wjd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b4wjd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-b4wjd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b81a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b81a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:05:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:05:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:05:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:05:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-17 14:05:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:05:05.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4100" for this suite.
Dec 17 14:05:13.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:05:13.358: INFO: namespace deployment-4100 deletion completed in 8.15953234s

• [SLOW TEST:18.871 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
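
The test above exercises the Recreate deployment strategy: the pod template is flipped from redis to nginx, the old ReplicaSet is scaled to 0 before the new one comes up, and the status dump accordingly shows AvailableReplicas:0 / UnavailableReplicas:1 mid-rollout. Below is a minimal client-go sketch of an equivalent Deployment — an illustration, not the suite's code. It assumes the kubeconfig path used throughout this run, a recent client-go (the v1.15-era client takes no context/options arguments), and uses "default" in place of the ephemeral test namespace.

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			// Recreate kills every old pod before any new pod is created,
			// so availability drops to zero during the rollout.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	_, err = cs.AppsV1().Deployments("default").Create(context.TODO(), dep, metav1.CreateOptions{})
	must(err)
	fmt.Println("deployment created")
}
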
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:05:13.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:06:07.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5597" for this suite.
Dec 17 14:06:13.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:06:13.212: INFO: namespace container-runtime-5597 deletion completed in 6.154996788s

• [SLOW TEST:59.853 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
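
The container names terminate-cmd-rpa / rpof / rpn appear to encode the three RestartPolicy values (Always / OnFailure / Never): the test starts a container that exits and asserts the resulting RestartCount, pod Phase, Ready condition, and container State for each policy. A sketch of the same observation loop, under the same kubeconfig and namespace assumptions as above ("terminate-check" and the busybox image are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the "rpn" case; also try Always / OnFailure
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 0"},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)

	// Poll until the container terminates, then read the fields the test
	// asserts on: RestartCount, pod Phase, Ready, and container State.
	for {
		p, err := cs.CoreV1().Pods("default").Get(context.TODO(), "terminate-check", metav1.GetOptions{})
		must(err)
		if len(p.Status.ContainerStatuses) > 0 && p.Status.ContainerStatuses[0].State.Terminated != nil {
			st := p.Status.ContainerStatuses[0]
			fmt.Println("phase:", p.Status.Phase, "restarts:", st.RestartCount, "ready:", st.Ready)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
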
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:06:13.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 17 14:06:13.320: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 17 14:06:18.333: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:06:19.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8101" for this suite.
Dec 17 14:06:25.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:06:25.586: INFO: namespace replication-controller-8101 deletion completed in 6.210332023s

• [SLOW TEST:12.374 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
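
"Releasing" a pod means changing its labels so the ReplicationController's selector no longer matches it; the controller then clears the pod's ownerReference and creates a replacement to restore the replica count. A sketch of the label flip with a strategic-merge patch — the pod name and label values here are hypothetical stand-ins for the test's pod-release pods:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Overwrite the selector label on a pod currently owned by the RC.
	// Once it stops matching, the RC orphans it and spawns a new pod.
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
	_, err = cs.CoreV1().Pods("default").Patch(context.TODO(), "pod-release-abc12",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	must(err)
	fmt.Println("pod no longer matches the RC selector; it should now be released")
}
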
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:06:25.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 14:06:25.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6569'
Dec 17 14:06:25.978: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 14:06:25.978: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 17 14:06:25.990: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 17 14:06:26.050: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 17 14:06:26.088: INFO: scanned /root for discovery docs: 
Dec 17 14:06:26.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6569'
Dec 17 14:06:53.665: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 17 14:06:53.665: INFO: stdout: "Created e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7\nScaling up e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"

STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 17 14:06:53.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6569'
Dec 17 14:06:53.834: INFO: stderr: ""
Dec 17 14:06:53.834: INFO: stdout: "e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7-7x48n "
Dec 17 14:06:53.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7-7x48n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6569'
Dec 17 14:06:54.020: INFO: stderr: ""
Dec 17 14:06:54.021: INFO: stdout: "true"
Dec 17 14:06:54.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7-7x48n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6569'
Dec 17 14:06:54.201: INFO: stderr: ""
Dec 17 14:06:54.202: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 17 14:06:54.202: INFO: e2e-test-nginx-rc-10c03d534e9899eb552a6c3c77a456b7-7x48n is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 17 14:06:54.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6569'
Dec 17 14:06:54.324: INFO: stderr: ""
Dec 17 14:06:54.325: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:06:54.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6569" for this suite.
Dec 17 14:07:16.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:07:16.526: INFO: namespace kubectl-6569 deletion completed in 22.152119305s

• [SLOW TEST:50.939 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
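
As the stderr lines above note, both `kubectl run --generator=run/v1` and `kubectl rolling-update` were already deprecated in this release (rolling-update, which worked on ReplicationControllers, was later removed entirely). The Deployment-era equivalent of a "rolling-update to the same image" is a rollout restart, which just stamps a new pod-template annotation so a fresh ReplicaSet is rolled out. A sketch of that patch under the same assumptions as the earlier snippets; "web" is a placeholder Deployment name:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Equivalent of `kubectl rollout restart deployment/web`: bump a
	// template annotation so the pods are replaced even though the image
	// is unchanged.
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339))
	_, err = cs.AppsV1().Deployments("default").Patch(context.TODO(), "web",
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	must(err)
	fmt.Println("rollout triggered")
}
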
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:07:16.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7052.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.160_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7052.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7052.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7052.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 160.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.160_udp@PTR;check="$$(dig +tcp +noall +answer +search 160.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.160_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 14:07:30.845: INFO: Unable to read wheezy_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.858: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.873: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.888: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.904: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.914: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.924: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.931: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.938: INFO: Unable to read 10.107.190.160_udp@PTR from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.942: INFO: Unable to read 10.107.190.160_tcp@PTR from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.946: INFO: Unable to read jessie_udp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.948: INFO: Unable to read jessie_tcp@dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.951: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.957: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.961: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.966: INFO: Unable to read jessie_udp@PodARecord from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.969: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.972: INFO: Unable to read 10.107.190.160_udp@PTR from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.975: INFO: Unable to read 10.107.190.160_tcp@PTR from pod dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f: the server could not find the requested resource (get pods dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f)
Dec 17 14:07:30.975: INFO: Lookups using dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f failed for: [wheezy_udp@dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.107.190.160_udp@PTR 10.107.190.160_tcp@PTR jessie_udp@dns-test-service.dns-7052.svc.cluster.local jessie_tcp@dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7052.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7052.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7052.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.107.190.160_udp@PTR 10.107.190.160_tcp@PTR]

Dec 17 14:07:36.099: INFO: DNS probes using dns-7052/dns-test-83a8081a-b99b-4b8d-8dd2-9b3a9f38916f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:07:36.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7052" for this suite.
Dec 17 14:07:42.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:07:42.654: INFO: namespace dns-7052 deletion completed in 6.182903214s

• [SLOW TEST:26.127 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
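
The wheezy/jessie probe pods above loop over dig queries for the service's A, SRV, and PTR records until every lookup succeeds. The same lookups can be made from Go with the standard resolver — this only works from inside the cluster, where /etc/resolv.conf points at the cluster DNS and carries the svc search path. The names below reuse the (long gone) dns-7052 namespace from this run; substitute your own service and namespace:

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record: <service>.<namespace>.svc.cluster.local
	addrs, err := net.LookupHost("dns-test-service.dns-7052.svc.cluster.local")
	fmt.Println("A:", addrs, err)

	// SRV records exist per named service port:
	// _<port>._<proto>.<service>.<namespace>.svc.cluster.local
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-7052.svc.cluster.local")
	must := err == nil
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
	fmt.Println("srv lookup ok:", must)

	// PTR: reverse lookup of the ClusterIP, as in the 10.107.190.160_udp@PTR probes.
	names, err := net.LookupAddr("10.107.190.160")
	fmt.Println("PTR:", names, err)
}
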
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:07:42.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 17 14:07:42.876: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020195,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 14:07:42.877: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020195,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 17 14:07:52.896: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020210,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 17 14:07:52.896: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020210,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 17 14:08:03.079: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020224,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 14:08:03.080: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020224,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 17 14:08:13.108: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020238,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 14:08:13.108: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-a,UID:881e4fd3-4ad1-40d1-9cc7-113d0f815877,ResourceVersion:17020238,Generation:0,CreationTimestamp:2019-12-17 14:07:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 17 14:08:23.137: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-b,UID:2edc3282-c7f5-4a35-9cb4-4dbee3e32138,ResourceVersion:17020252,Generation:0,CreationTimestamp:2019-12-17 14:08:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 14:08:23.138: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-b,UID:2edc3282-c7f5-4a35-9cb4-4dbee3e32138,ResourceVersion:17020252,Generation:0,CreationTimestamp:2019-12-17 14:08:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 17 14:08:33.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-b,UID:2edc3282-c7f5-4a35-9cb4-4dbee3e32138,ResourceVersion:17020267,Generation:0,CreationTimestamp:2019-12-17 14:08:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 14:08:33.159: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7514,SelfLink:/api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-configmap-b,UID:2edc3282-c7f5-4a35-9cb4-4dbee3e32138,ResourceVersion:17020267,Generation:0,CreationTimestamp:2019-12-17 14:08:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:08:43.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7514" for this suite.
Dec 17 14:08:49.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:08:49.351: INFO: namespace watch-7514 deletion completed in 6.179851415s

• [SLOW TEST:66.697 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
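
Each "Got : ADDED/MODIFIED/DELETED" line appears twice above because two watchers match each configmap: the label-A watch and the A-or-B watch both receive their own copy of every event. A sketch of one such label-selected watch, under the same kubeconfig/namespace assumptions as the earlier snippets:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// The "watch on configmaps with label A" from the test, as a selector.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	must(err)
	defer w.Stop()

	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch bookmark or error event
		}
		fmt.Println(ev.Type, cm.Name, cm.Data) // ADDED / MODIFIED / DELETED
	}
}
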
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:08:49.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7
Dec 17 14:08:49.682: INFO: Pod name my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7: Found 1 pods out of 1
Dec 17 14:08:49.682: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7" are running
Dec 17 14:08:59.207: INFO: Pod "my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7-5n2p4" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 14:08:49 +0000 UTC Reason: Message:}])
Dec 17 14:08:59.208: INFO: Trying to dial the pod
Dec 17 14:09:04.262: INFO: Controller my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7: Got expected result from replica 1 [my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7-5n2p4]: "my-hostname-basic-10d8822d-d940-4f46-bccd-ebf4d7cb8bd7-5n2p4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:09:04.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7986" for this suite.
Dec 17 14:09:10.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:09:10.425: INFO: namespace replication-controller-7986 deletion completed in 6.148038042s

• [SLOW TEST:21.073 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
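
"Trying to dial the pod" above means the test contacts each replica and expects it to answer with its own pod name. A rough in-cluster equivalent: list the RC's pods by its name label and GET each pod IP directly. Pod IPs are only reachable from inside the cluster network, and 9376 is the port conventionally used by the serve-hostname test image (an assumption here; check the image you deploy):

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// "my-hostname-basic" stands in for the generated RC name above.
	pods, err := cs.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=my-hostname-basic",
	})
	must(err)
	for _, p := range pods.Items {
		resp, err := http.Get(fmt.Sprintf("http://%s:9376/", p.Status.PodIP))
		if err != nil {
			fmt.Println(p.Name, "unreachable:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Each replica should answer with its own pod name.
		fmt.Printf("%s -> %s\n", p.Name, body)
	}
}
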
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:09:10.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-815
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 17 14:09:10.596: INFO: Found 0 stateful pods, waiting for 3
Dec 17 14:09:20.620: INFO: Found 2 stateful pods, waiting for 3
Dec 17 14:09:30.618: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 14:09:30.619: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 14:09:30.619: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 14:09:40.612: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 14:09:40.612: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 14:09:40.612: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 17 14:09:40.651: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 17 14:09:50.735: INFO: Updating stateful set ss2
Dec 17 14:09:50.746: INFO: Waiting for Pod statefulset-815/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 14:10:00.763: INFO: Waiting for Pod statefulset-815/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 17 14:10:11.312: INFO: Found 2 stateful pods, waiting for 3
Dec 17 14:10:21.325: INFO: Found 2 stateful pods, waiting for 3
Dec 17 14:10:31.328: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 14:10:31.328: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 14:10:31.328: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 17 14:10:31.366: INFO: Updating stateful set ss2
Dec 17 14:10:31.448: INFO: Waiting for Pod statefulset-815/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 14:10:41.466: INFO: Waiting for Pod statefulset-815/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 14:10:51.493: INFO: Updating stateful set ss2
Dec 17 14:10:51.536: INFO: Waiting for StatefulSet statefulset-815/ss2 to complete update
Dec 17 14:10:51.536: INFO: Waiting for Pod statefulset-815/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 14:11:01.551: INFO: Waiting for StatefulSet statefulset-815/ss2 to complete update
Dec 17 14:11:01.551: INFO: Waiting for Pod statefulset-815/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 14:11:11.555: INFO: Waiting for StatefulSet statefulset-815/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 17 14:11:21.549: INFO: Deleting all statefulset in ns statefulset-815
Dec 17 14:11:21.554: INFO: Scaling statefulset ss2 to 0
Dec 17 14:11:51.587: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 14:11:51.592: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:11:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-815" for this suite.
Dec 17 14:12:00.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:12:00.381: INFO: namespace statefulset-815 deletion completed in 8.724736721s

• [SLOW TEST:169.954 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
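
The canary and phased behaviour above comes from the RollingUpdate strategy's partition field: a partition greater than the replica count means a template change updates nothing; partition=2 on the 3-replica ss2 updates only ordinal ss2-2 (the canary); lowering the partition then phases the new revision across ss2-1 and ss2-0, exactly the revision-wait messages in the log. A sketch of setting the partition via patch, with "default"/"ss2" standing in for the test namespace and set:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Only ordinals >= partition receive the updated template revision.
	patch := []byte(`{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}`)
	_, err = cs.AppsV1().StatefulSets("default").Patch(context.TODO(), "ss2",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	must(err)
	fmt.Println("partition set; update the pod template to roll out the canary,")
	fmt.Println("then lower the partition step by step to phase the update")
}
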
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:12:00.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:12:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4914" for this suite.
Dec 17 14:12:12.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:12:12.177: INFO: namespace watch-4914 deletion completed in 6.264290652s

• [SLOW TEST:11.796 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
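
The guarantee being verified: watches opened from the same resourceVersion replay the same events in the same order, no matter how many concurrent watchers there are. A sketch that opens two watches from one starting resourceVersion and compares the ordered streams; "12345" is a placeholder RV (in practice, take it from a prior List call):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

// collect returns the resourceVersions of the next n ConfigMap events
// observed by a watch started at resourceVersion rv.
func collect(cs *kubernetes.Clientset, rv string, n int) []string {
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{ResourceVersion: rv})
	must(err)
	defer w.Stop()
	var out []string
	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			out = append(out, cm.ResourceVersion)
			if len(out) == n {
				break
			}
		}
	}
	return out
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	a := collect(cs, "12345", 5)
	b := collect(cs, "12345", 5)
	fmt.Println("ordered identically:", fmt.Sprint(a) == fmt.Sprint(b))
}
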
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:12:12.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-8ff02c99-082e-443b-8d7e-dde2cda078cb
STEP: Creating configMap with name cm-test-opt-upd-26d06126-c51c-4973-96bd-619b673942a8
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8ff02c99-082e-443b-8d7e-dde2cda078cb
STEP: Updating configmap cm-test-opt-upd-26d06126-c51c-4973-96bd-619b673942a8
STEP: Creating configMap with name cm-test-opt-create-61275e58-8ed0-4621-a1d9-2ce8f3343f9e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:12:29.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1372" for this suite.
Dec 17 14:12:51.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:12:51.247: INFO: namespace configmap-1372 deletion completed in 22.229665802s

• [SLOW TEST:39.070 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
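
"Optional" configmap volumes are what let the pod above keep running while one source configmap is deleted, another is updated, and a third is created: with Optional set, a missing configmap yields an empty directory instead of blocking the pod, and the kubelet syncs later changes into the mounted files (the "waiting to observe update in volume" step). A sketch of such a pod — names and the busybox image are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-opt-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Optional: the pod starts even if this ConfigMap is
						// absent or later deleted; updates are reflected in
						// the mounted files by the kubelet.
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("pod created; edit or delete the configmap and watch the logs")
}
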
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:12:51.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-89667b85-5d22-4314-aa3a-dcf0e6fe6de9
STEP: Creating a pod to test consume secrets
Dec 17 14:12:51.358: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc" in namespace "projected-7518" to be "success or failure"
Dec 17 14:12:51.386: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc": Phase="Pending", Reason="", readiness=false. Elapsed: 27.677136ms
Dec 17 14:12:53.403: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044734926s
Dec 17 14:12:55.460: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10208127s
Dec 17 14:12:57.472: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114144184s
Dec 17 14:12:59.482: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123954559s
Dec 17 14:13:01.493: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.135064431s
STEP: Saw pod success
Dec 17 14:13:01.493: INFO: Pod "pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc" satisfied condition "success or failure"
Dec 17 14:13:01.500: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 14:13:01.853: INFO: Waiting for pod pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc to disappear
Dec 17 14:13:01.863: INFO: Pod pod-projected-secrets-41c0a286-04d8-4d17-a7c4-dda87b1431fc no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:13:01.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7518" for this suite.
Dec 17 14:13:07.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:13:08.111: INFO: namespace projected-7518 deletion completed in 6.184375705s

• [SLOW TEST:16.863 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
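
A projected volume merges several sources (secrets, configMaps, downwardAPI, service account tokens) under a single mount point; the test above projects one secret and has the pod read it back. A sketch with a single Secret source — pod, secret, and image names are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "proj", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "proj",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							// More sources can be appended here to merge
							// configMaps, downwardAPI, etc. into one mount.
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
	fmt.Println("pod created; its log should show the secret's keys")
}
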
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:13:08.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c126f60c-747e-4c51-a04b-244a11c1b8d4
STEP: Creating a pod to test consume configMaps
Dec 17 14:13:08.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3" in namespace "configmap-5610" to be "success or failure"
Dec 17 14:13:08.306: INFO: Pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3": Phase="Pending", Reason="", readiness=false. Elapsed: 50.671646ms
Dec 17 14:13:10.326: INFO: Pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070209815s
Dec 17 14:13:12.336: INFO: Pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080216638s
Dec 17 14:13:14.352: INFO: Pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096843821s
Dec 17 14:13:16.364: INFO: Pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108362249s
STEP: Saw pod success
Dec 17 14:13:16.364: INFO: Pod "pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3" satisfied condition "success or failure"
Dec 17 14:13:16.375: INFO: Trying to get logs from node iruya-node pod pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3 container configmap-volume-test: 
STEP: delete the pod
Dec 17 14:13:16.470: INFO: Waiting for pod pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3 to disappear
Dec 17 14:13:16.489: INFO: Pod pod-configmaps-13af9faf-2e13-42f2-ae63-201c04d28de3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:13:16.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5610" for this suite.
Dec 17 14:13:22.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:13:22.805: INFO: namespace configmap-5610 deletion completed in 6.308001246s

• [SLOW TEST:14.693 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
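A minimal sketch of the defaultMode variant (names illustrative); the only difference from a plain configMap volume is the file mode applied to the projected keys:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-example   # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                      # assumed stand-in for the suite's test image
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
      defaultMode: 0666                 # octal; files appear as -rw-rw-rw-, hence [LinuxOnly]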
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:13:22.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-pwq5
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 14:13:23.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pwq5" in namespace "subpath-5609" to be "success or failure"
Dec 17 14:13:23.042: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.990635ms
Dec 17 14:13:25.054: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019545438s
Dec 17 14:13:27.065: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030062716s
Dec 17 14:13:29.118: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083823777s
Dec 17 14:13:31.129: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 8.094064884s
Dec 17 14:13:33.139: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 10.104067139s
Dec 17 14:13:35.147: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 12.112720095s
Dec 17 14:13:37.154: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 14.119099593s
Dec 17 14:13:39.172: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 16.137490867s
Dec 17 14:13:41.183: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 18.148285042s
Dec 17 14:13:43.197: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 20.162630227s
Dec 17 14:13:45.212: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 22.176956076s
Dec 17 14:13:47.221: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 24.186346363s
Dec 17 14:13:49.232: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 26.197181071s
Dec 17 14:13:51.243: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Running", Reason="", readiness=true. Elapsed: 28.208782647s
Dec 17 14:13:53.254: INFO: Pod "pod-subpath-test-downwardapi-pwq5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.219326414s
STEP: Saw pod success
Dec 17 14:13:53.254: INFO: Pod "pod-subpath-test-downwardapi-pwq5" satisfied condition "success or failure"
Dec 17 14:13:53.258: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-pwq5 container test-container-subpath-downwardapi-pwq5: 
STEP: delete the pod
Dec 17 14:13:53.314: INFO: Waiting for pod pod-subpath-test-downwardapi-pwq5 to disappear
Dec 17 14:13:53.318: INFO: Pod pod-subpath-test-downwardapi-pwq5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pwq5
Dec 17 14:13:53.318: INFO: Deleting pod "pod-subpath-test-downwardapi-pwq5" in namespace "subpath-5609"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:13:53.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5609" for this suite.
Dec 17 14:13:59.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:13:59.495: INFO: namespace subpath-5609 deletion completed in 6.169909391s

• [SLOW TEST:36.690 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
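The "atomic writer" in the test name refers to the symlink-swap mechanism kubelet uses when it refreshes downwardAPI/configMap/secret volumes; note the pod above stays in Running for roughly 22 seconds, long enough to survive at least one refresh. A rough equivalent manifest, with illustrative names and a busybox stand-in:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downwardapi-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                        # assumed stand-in for the suite's test image
    command: ["sh", "-c", "for i in $(seq 1 20); do cat /test-subpath/podname; sleep 1; done"]
    volumeMounts:
    - name: downward-vol
      mountPath: /test-subpath
      subPath: downward                   # mount only the downward/ directory of the volume
  volumes:
  - name: downward-vol
    downwardAPI:
      items:
      - path: downward/podname            # lands under the subPath mounted above
        fieldRef:
          fieldPath: metadata.name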
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:13:59.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5396
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 14:13:59.649: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 14:14:33.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5396 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 14:14:33.995: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 14:14:34.506: INFO: Waiting for endpoints: map[]
Dec 17 14:14:34.514: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5396 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 14:14:34.515: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 14:14:35.017: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:14:35.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5396" for this suite.
Dec 17 14:14:59.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:14:59.181: INFO: namespace pod-network-test-5396 deletion completed in 24.148958228s

• [SLOW TEST:59.685 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
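The curl lines above use the e2e "dial" pattern: each test pod serves an HTTP /dial endpoint that relays a probe (here UDP to port 8081) to a peer pod and returns the peer's reported hostname, so pod-to-pod connectivity can be verified from a host-network pod. A sketch of one such probe pod, assuming the netexec helper image and flags this suite shipped with:

apiVersion: v1
kind: Pod
metadata:
  name: netserver-example                                  # illustrative
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed image and flags
    args: ["--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8080
      protocol: TCP
    - containerPort: 8081
      protocol: UDP

The "Waiting for endpoints: map[]" lines then indicate that no expected peers remain unanswered.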
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:14:59.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 14:14:59.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-149'
Dec 17 14:15:01.415: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 14:15:01.415: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 17 14:15:01.501: INFO: Waiting up to 5m0s for 1 pod to be running and ready: [e2e-test-nginx-rc-2gkpt]

Dec 17 14:15:01.501: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2gkpt" in namespace "kubectl-149" to be "running and ready"
Dec 17 14:15:01.527: INFO: Pod "e2e-test-nginx-rc-2gkpt": Phase="Pending", Reason="", readiness=false. Elapsed: 25.591263ms
Dec 17 14:15:03.538: INFO: Pod "e2e-test-nginx-rc-2gkpt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036427715s
Dec 17 14:15:05.549: INFO: Pod "e2e-test-nginx-rc-2gkpt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047367177s
Dec 17 14:15:07.556: INFO: Pod "e2e-test-nginx-rc-2gkpt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054426072s
Dec 17 14:15:09.567: INFO: Pod "e2e-test-nginx-rc-2gkpt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065790713s
Dec 17 14:15:11.576: INFO: Pod "e2e-test-nginx-rc-2gkpt": Phase="Running", Reason="", readiness=true. Elapsed: 10.074308832s
Dec 17 14:15:11.576: INFO: Pod "e2e-test-nginx-rc-2gkpt" satisfied condition "running and ready"
Dec 17 14:15:11.576: INFO: Wanted 1 pod to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2gkpt]
Dec 17 14:15:11.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-149'
Dec 17 14:15:11.826: INFO: stderr: ""
Dec 17 14:15:11.826: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 17 14:15:11.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-149'
Dec 17 14:15:11.992: INFO: stderr: ""
Dec 17 14:15:11.992: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:15:11.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-149" for this suite.
Dec 17 14:15:34.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:15:34.121: INFO: namespace kubectl-149 deletion completed in 22.123950326s

• [SLOW TEST:34.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
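As the deprecation warning says, --generator=run/v1 is on its way out; the ReplicationController it creates can be written explicitly instead. Roughly (the run/v1 generator labels everything run=<name>):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine

Note the empty stdout from kubectl logs above still passes: the check is only that fetching logs from the RC's pod succeeds, and nginx typically logs nothing until it serves traffic.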
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:15:34.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 14:15:42.467: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:15:42.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9687" for this suite.
Dec 17 14:15:48.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:15:48.746: INFO: namespace container-runtime-9687 deletion completed in 6.245261764s

• [SLOW TEST:14.624 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
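What is being exercised: the container exits non-zero without writing /dev/termination-log, and FallbackToLogsOnError makes kubelet use the tail of the container log ("DONE", matching the Expected line above) as the termination message instead. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example                  # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: busybox                                   # assumed stand-in
    command: ["sh", "-c", "echo -n DONE; exit 1"]    # fails without touching /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError  # so the log tail becomes the message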
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:15:48.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 17 14:15:49.038: INFO: Waiting up to 5m0s for pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4" in namespace "downward-api-3030" to be "success or failure"
Dec 17 14:15:49.057: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.282469ms
Dec 17 14:15:51.068: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029711928s
Dec 17 14:15:53.083: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044210189s
Dec 17 14:15:55.094: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055326248s
Dec 17 14:15:57.103: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064867254s
Dec 17 14:15:59.115: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076566256s
STEP: Saw pod success
Dec 17 14:15:59.115: INFO: Pod "downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4" satisfied condition "success or failure"
Dec 17 14:15:59.120: INFO: Trying to get logs from node iruya-node pod downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4 container dapi-container: 
STEP: delete the pod
Dec 17 14:15:59.293: INFO: Waiting for pod downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4 to disappear
Dec 17 14:15:59.309: INFO: Pod downward-api-c07b8e8c-3b74-49de-aa9e-7023f5fd6fc4 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:15:59.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3030" for this suite.
Dec 17 14:16:05.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:16:05.472: INFO: namespace downward-api-3030 deletion completed in 6.154826145s

• [SLOW TEST:16.726 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
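The three values under test map one-to-one onto downward-API field selectors; a minimal pod (names illustrative) would be:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                 # assumed stand-in
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP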
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:16:05.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 17 14:16:05.550: INFO: Waiting up to 5m0s for pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27" in namespace "emptydir-3676" to be "success or failure"
Dec 17 14:16:05.561: INFO: Pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27": Phase="Pending", Reason="", readiness=false. Elapsed: 11.65176ms
Dec 17 14:16:07.569: INFO: Pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019130811s
Dec 17 14:16:09.579: INFO: Pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029051344s
Dec 17 14:16:11.588: INFO: Pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038310117s
Dec 17 14:16:13.608: INFO: Pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057858419s
STEP: Saw pod success
Dec 17 14:16:13.608: INFO: Pod "pod-f7b46537-91eb-43aa-9221-ae25d781af27" satisfied condition "success or failure"
Dec 17 14:16:13.615: INFO: Trying to get logs from node iruya-node pod pod-f7b46537-91eb-43aa-9221-ae25d781af27 container test-container: 
STEP: delete the pod
Dec 17 14:16:13.731: INFO: Waiting for pod pod-f7b46537-91eb-43aa-9221-ae25d781af27 to disappear
Dec 17 14:16:13.740: INFO: Pod pod-f7b46537-91eb-43aa-9221-ae25d781af27 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:16:13.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3676" for this suite.
Dec 17 14:16:19.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:16:19.963: INFO: namespace emptydir-3676 deletion completed in 6.215159775s

• [SLOW TEST:14.490 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
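The (root,0666,tmpfs) triple in the test name encodes: run as root, expect mode 0666 on the created file, and back the emptyDir with tmpfs. Sketched as a manifest (illustrative names; the suite's own test image performs the actual mode checks):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # assumed stand-in
    securityContext:
      runAsUser: 0                    # the "root" part
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # the "tmpfs" part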
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:16:19.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 17 14:16:20.048: INFO: namespace kubectl-7960
Dec 17 14:16:20.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7960'
Dec 17 14:16:20.389: INFO: stderr: ""
Dec 17 14:16:20.390: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 17 14:16:21.402: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:21.402: INFO: Found 0 / 1
Dec 17 14:16:22.403: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:22.403: INFO: Found 0 / 1
Dec 17 14:16:23.412: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:23.413: INFO: Found 0 / 1
Dec 17 14:16:24.402: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:24.403: INFO: Found 0 / 1
Dec 17 14:16:25.399: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:25.399: INFO: Found 0 / 1
Dec 17 14:16:26.398: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:26.398: INFO: Found 0 / 1
Dec 17 14:16:27.421: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:27.421: INFO: Found 0 / 1
Dec 17 14:16:28.400: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:28.400: INFO: Found 1 / 1
Dec 17 14:16:28.400: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 17 14:16:28.405: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 14:16:28.405: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Dec 17 14:16:28.405: INFO: wait on redis-master startup in kubectl-7960
Dec 17 14:16:28.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lqbpj redis-master --namespace=kubectl-7960'
Dec 17 14:16:28.698: INFO: stderr: ""
Dec 17 14:16:28.698: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Dec 14:16:27.104 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 14:16:27.104 # Server started, Redis version 3.2.12\n1:M 17 Dec 14:16:27.104 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Dec 14:16:27.104 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 17 14:16:28.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7960'
Dec 17 14:16:28.926: INFO: stderr: ""
Dec 17 14:16:28.927: INFO: stdout: "service/rm2 exposed\n"
Dec 17 14:16:29.017: INFO: Service rm2 in namespace kubectl-7960 found.
STEP: exposing service
Dec 17 14:16:31.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7960'
Dec 17 14:16:31.388: INFO: stderr: ""
Dec 17 14:16:31.389: INFO: stdout: "service/rm3 exposed\n"
Dec 17 14:16:31.425: INFO: Service rm3 in namespace kubectl-7960 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:16:33.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7960" for this suite.
Dec 17 14:16:57.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:16:57.661: INFO: namespace kubectl-7960 deletion completed in 24.209478144s

• [SLOW TEST:37.698 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
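Each `kubectl expose` above is shorthand for creating a Service whose selector copies the target's labels. rm2, for instance, is roughly equivalent to the following, assuming the redis-master RC's pods carry the app=redis,role=master labels used by the guestbook manifests later in this run:

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:              # assumed labels, copied from the RC's pod template
    app: redis
    role: master
  ports:
  - port: 1234           # service port from the command line
    targetPort: 6379     # container port redis listens on

Exposing a service (rm3) works the same way: the new Service reuses rm2's selector, so both front the same pods on different ports.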
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:16:57.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 17 14:16:57.767: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 14:16:57.778: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 14:16:57.782: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 17 14:16:57.809: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 17 14:16:57.809: INFO: 	Container weave ready: true, restart count 0
Dec 17 14:16:57.809: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 14:16:57.809: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.809: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 14:16:57.809: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 17 14:16:57.819: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container etcd ready: true, restart count 0
Dec 17 14:16:57.819: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 17 14:16:57.819: INFO: 	Container weave ready: true, restart count 0
Dec 17 14:16:57.819: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 14:16:57.819: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container coredns ready: true, restart count 0
Dec 17 14:16:57.819: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 17 14:16:57.819: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 14:16:57.819: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 17 14:16:57.819: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 17 14:16:57.819: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 17 14:16:57.819: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-365b721b-d962-4a27-9c71-12f60557aa76 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-365b721b-d962-4a27-9c71-12f60557aa76 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-365b721b-d962-4a27-9c71-12f60557aa76
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:17:18.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4668" for this suite.
Dec 17 14:17:32.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:17:32.320: INFO: namespace sched-pred-4668 deletion completed in 14.133140001s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:34.659 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
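The relaunched pod in "Trying to relaunch the pod, now with labels" pins itself via nodeSelector to the label just applied to iruya-node. Using the actual label key and value from this run (pod name and image are illustrative), the scheduling-relevant part of the spec is:

apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-example        # illustrative
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1     # assumed placeholder image
  nodeSelector:
    kubernetes.io/e2e-365b721b-d962-4a27-9c71-12f60557aa76: "42"   # label value must be a string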
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:17:32.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-189ae547-ba58-42d8-8a7d-510bf2f0cac6
STEP: Creating a pod to test consume secrets
Dec 17 14:17:32.456: INFO: Waiting up to 5m0s for pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2" in namespace "secrets-3950" to be "success or failure"
Dec 17 14:17:32.466: INFO: Pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072786ms
Dec 17 14:17:34.486: INFO: Pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029884987s
Dec 17 14:17:36.504: INFO: Pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047749803s
Dec 17 14:17:38.523: INFO: Pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066571818s
Dec 17 14:17:40.553: INFO: Pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0965671s
STEP: Saw pod success
Dec 17 14:17:40.554: INFO: Pod "pod-secrets-84348d85-3914-4470-878d-992333a38ff2" satisfied condition "success or failure"
Dec 17 14:17:40.561: INFO: Trying to get logs from node iruya-node pod pod-secrets-84348d85-3914-4470-878d-992333a38ff2 container secret-volume-test: 
STEP: delete the pod
Dec 17 14:17:40.745: INFO: Waiting for pod pod-secrets-84348d85-3914-4470-878d-992333a38ff2 to disappear
Dec 17 14:17:40.834: INFO: Pod pod-secrets-84348d85-3914-4470-878d-992333a38ff2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:17:40.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3950" for this suite.
Dec 17 14:17:46.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:17:47.049: INFO: namespace secrets-3950 deletion completed in 6.205605021s

• [SLOW TEST:14.729 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
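Here the interesting knobs are on the pod security context plus the volume: run as a non-root UID, hand group ownership of the projected files to fsGroup, and tighten the file mode. A minimal sketch (illustrative names and values; assumes the referenced secret exists):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-fsgroup-example   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000    # non-root
    fsGroup: 1001      # group ownership applied to the secret files
  containers:
  - name: secret-volume-test
    image: busybox     # assumed stand-in
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative
      defaultMode: 0440                 # readable by owner and fsGroup only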
SSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:17:47.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 17 14:17:47.134: INFO: Waiting up to 5m0s for pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023" in namespace "downward-api-6263" to be "success or failure"
Dec 17 14:17:47.194: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023": Phase="Pending", Reason="", readiness=false. Elapsed: 59.501705ms
Dec 17 14:17:49.206: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07188238s
Dec 17 14:17:52.147: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023": Phase="Pending", Reason="", readiness=false. Elapsed: 5.012670088s
Dec 17 14:17:54.155: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023": Phase="Pending", Reason="", readiness=false. Elapsed: 7.020819128s
Dec 17 14:17:56.166: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023": Phase="Running", Reason="", readiness=true. Elapsed: 9.032030702s
Dec 17 14:17:58.176: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.041898131s
STEP: Saw pod success
Dec 17 14:17:58.176: INFO: Pod "downward-api-70e57470-0437-44da-94bd-c93e023c4023" satisfied condition "success or failure"
Dec 17 14:17:58.182: INFO: Trying to get logs from node iruya-node pod downward-api-70e57470-0437-44da-94bd-c93e023c4023 container dapi-container: 
STEP: delete the pod
Dec 17 14:17:58.352: INFO: Waiting for pod downward-api-70e57470-0437-44da-94bd-c93e023c4023 to disappear
Dec 17 14:17:58.373: INFO: Pod downward-api-70e57470-0437-44da-94bd-c93e023c4023 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:17:58.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6263" for this suite.
Dec 17 14:18:04.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:18:04.523: INFO: namespace downward-api-6263 deletion completed in 6.131520231s

• [SLOW TEST:17.473 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
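Same downward-API mechanism as the earlier env-var test, but with the node-level field selector; the relevant fragment (illustrative names):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                    # assumed stand-in
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # IP of the node the pod landed on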
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:18:04.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-4471
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4471 to expose endpoints map[]
Dec 17 14:18:04.802: INFO: Get endpoints failed (83.727071ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 17 14:18:05.813: INFO: successfully validated that service endpoint-test2 in namespace services-4471 exposes endpoints map[] (1.094838019s elapsed)
STEP: Creating pod pod1 in namespace services-4471
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4471 to expose endpoints map[pod1:[80]]
Dec 17 14:18:09.958: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.11941431s elapsed, will retry)
Dec 17 14:18:14.015: INFO: successfully validated that service endpoint-test2 in namespace services-4471 exposes endpoints map[pod1:[80]] (8.176492612s elapsed)
STEP: Creating pod pod2 in namespace services-4471
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4471 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 17 14:18:18.649: INFO: Unexpected endpoints: found map[b09b0e88-de0b-4ddc-b4fa-0213ebb32621:[80]], expected map[pod1:[80] pod2:[80]] (4.62470412s elapsed, will retry)
Dec 17 14:18:21.792: INFO: successfully validated that service endpoint-test2 in namespace services-4471 exposes endpoints map[pod1:[80] pod2:[80]] (7.768192946s elapsed)
STEP: Deleting pod pod1 in namespace services-4471
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4471 to expose endpoints map[pod2:[80]]
Dec 17 14:18:22.891: INFO: successfully validated that service endpoint-test2 in namespace services-4471 exposes endpoints map[pod2:[80]] (1.090407583s elapsed)
STEP: Deleting pod pod2 in namespace services-4471
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4471 to expose endpoints map[]
Dec 17 14:18:22.977: INFO: successfully validated that service endpoint-test2 in namespace services-4471 exposes endpoints map[] (67.254871ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:18:23.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4471" for this suite.
Dec 17 14:18:46.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:18:46.692: INFO: namespace services-4471 deletion completed in 23.55831538s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:42.168 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
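The endpoint churn above (map[] -> pod1 -> pod1+pod2 -> pod2 -> map[]) is driven purely by label selection: the Service's selector determines which ready pods appear in its Endpoints object. A minimal pair, with an illustrative label and image:

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test         # illustrative label; pods below must carry it
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test
spec:
  containers:
  - name: web
    image: nginx:1.14-alpine   # assumed stand-in
    ports:
    - containerPort: 80

Deleting pod1 removes it from the Endpoints again, which is exactly the map[pod2:[80]] transition logged above.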
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:18:46.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:18:59.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4077" for this suite.
Dec 17 14:19:21.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:19:22.056: INFO: namespace replication-controller-4077 deletion completed in 22.156356534s

• [SLOW TEST:35.364 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
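Adoption here means the controller takes ownership of a pre-existing pod matching its selector rather than creating a replacement. Following the STEP text, a sketch using the same 'name' label (image is an assumed stand-in):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: nginx:1.14-alpine   # assumed stand-in
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption         # matches the orphan, so it is adopted instead of replaced
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx:1.14-alpine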
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:19:22.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-9968d04d-1b2c-422f-9c3b-b6742063d5c4
STEP: Creating secret with name secret-projected-all-test-volume-0902693f-cd88-4a0d-a1b2-5df439308040
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 17 14:19:22.241: INFO: Waiting up to 5m0s for pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf" in namespace "projected-5806" to be "success or failure"
Dec 17 14:19:22.358: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 116.3587ms
Dec 17 14:19:24.371: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129558972s
Dec 17 14:19:26.384: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142293727s
Dec 17 14:19:28.397: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155963998s
Dec 17 14:19:30.407: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165360418s
Dec 17 14:19:32.420: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178525115s
STEP: Saw pod success
Dec 17 14:19:32.420: INFO: Pod "projected-volume-8769e5da-356e-4810-bcda-116e88829dbf" satisfied condition "success or failure"
Dec 17 14:19:32.426: INFO: Trying to get logs from node iruya-node pod projected-volume-8769e5da-356e-4810-bcda-116e88829dbf container projected-all-volume-test: 
STEP: delete the pod
Dec 17 14:19:32.560: INFO: Waiting for pod projected-volume-8769e5da-356e-4810-bcda-116e88829dbf to disappear
Dec 17 14:19:32.589: INFO: Pod projected-volume-8769e5da-356e-4810-bcda-116e88829dbf no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:19:32.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5806" for this suite.
Dec 17 14:19:38.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:19:38.794: INFO: namespace projected-5806 deletion completed in 6.184649799s

• [SLOW TEST:16.737 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
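All three projection sources land in one volume here; a condensed sketch (names and keys illustrative, mirroring the generated configMap and secret above in spirit):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                 # assumed stand-in
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-example   # illustrative
          items:
          - key: data-1
            path: cm-data
      - secret:
          name: secret-projected-all-example      # illustrative
          items:
          - key: data-1
            path: secret-data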
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:19:38.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 14:19:38.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8948'
Dec 17 14:19:39.069: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 14:19:39.070: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 17 14:19:39.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8948'
Dec 17 14:19:39.231: INFO: stderr: ""
Dec 17 14:19:39.231: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:19:39.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8948" for this suite.
Dec 17 14:19:45.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:19:45.408: INFO: namespace kubectl-8948 deletion completed in 6.169664068s

• [SLOW TEST:6.614 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
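With no --generator flag, kubectl 1.15 falls back to deployment/apps.v1 (hence the warning above); the created object is roughly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine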
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:19:45.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4b4c71df-2130-4489-8b32-34f97f99f012
STEP: Creating a pod to test consume secrets
Dec 17 14:19:45.649: INFO: Waiting up to 5m0s for pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668" in namespace "secrets-7248" to be "success or failure"
Dec 17 14:19:45.671: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668": Phase="Pending", Reason="", readiness=false. Elapsed: 22.154882ms
Dec 17 14:19:47.685: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035775405s
Dec 17 14:19:49.694: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045250953s
Dec 17 14:19:51.711: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062022963s
Dec 17 14:19:53.724: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075248189s
Dec 17 14:19:56.517: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.868111366s
STEP: Saw pod success
Dec 17 14:19:56.517: INFO: Pod "pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668" satisfied condition "success or failure"
Dec 17 14:19:56.531: INFO: Trying to get logs from node iruya-node pod pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668 container secret-volume-test: 
STEP: delete the pod
Dec 17 14:19:56.616: INFO: Waiting for pod pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668 to disappear
Dec 17 14:19:56.660: INFO: Pod pod-secrets-42c8c5ce-12fb-4716-996f-e956f7cd3668 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:19:56.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7248" for this suite.
Dec 17 14:20:02.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:20:02.881: INFO: namespace secrets-7248 deletion completed in 6.209532769s
STEP: Destroying namespace "secret-namespace-1598" for this suite.
Dec 17 14:20:08.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:20:09.088: INFO: namespace secret-namespace-1598 deletion completed in 6.207259036s

• [SLOW TEST:23.679 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
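The test above relies on Secrets being namespaced: a Secret with the same name in the second namespace (secret-namespace-1598) must not affect the mount in secrets-7248. A minimal sketch of such a consuming pod, with hypothetical names and the busybox image this suite uses elsewhere:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test              # hypothetical name
data:
  data-1: dmFsdWUtMQ==           # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # resolves only within the pod's own namespace
EOF
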
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:20:09.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 17 14:20:09.201: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 17 14:20:09.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3522'
Dec 17 14:20:09.600: INFO: stderr: ""
Dec 17 14:20:09.601: INFO: stdout: "service/redis-slave created\n"
Dec 17 14:20:09.601: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 17 14:20:09.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3522'
Dec 17 14:20:10.004: INFO: stderr: ""
Dec 17 14:20:10.004: INFO: stdout: "service/redis-master created\n"
Dec 17 14:20:10.005: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 17 14:20:10.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3522'
Dec 17 14:20:10.595: INFO: stderr: ""
Dec 17 14:20:10.596: INFO: stdout: "service/frontend created\n"
Dec 17 14:20:10.596: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 17 14:20:10.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3522'
Dec 17 14:20:11.059: INFO: stderr: ""
Dec 17 14:20:11.059: INFO: stdout: "deployment.apps/frontend created\n"
Dec 17 14:20:11.060: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 17 14:20:11.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3522'
Dec 17 14:20:11.490: INFO: stderr: ""
Dec 17 14:20:11.491: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 17 14:20:11.492: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 17 14:20:11.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3522'
Dec 17 14:20:12.047: INFO: stderr: ""
Dec 17 14:20:12.047: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 17 14:20:12.047: INFO: Waiting for all frontend pods to be Running.
Dec 17 14:20:37.101: INFO: Waiting for frontend to serve content.
Dec 17 14:20:37.384: INFO: Trying to add a new entry to the guestbook.
Dec 17 14:20:37.466: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 17 14:20:37.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3522'
Dec 17 14:20:37.732: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:20:37.732: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:20:37.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3522'
Dec 17 14:20:37.995: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:20:37.995: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:20:37.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3522'
Dec 17 14:20:38.280: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:20:38.280: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:20:38.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3522'
Dec 17 14:20:38.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:20:38.392: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:20:38.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3522'
Dec 17 14:20:38.529: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:20:38.529: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:20:38.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3522'
Dec 17 14:20:38.962: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:20:38.963: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:20:38.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3522" for this suite.
Dec 17 14:21:31.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:21:31.322: INFO: namespace kubectl-3522 deletion completed in 52.277550452s

• [SLOW TEST:82.234 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:21:31.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 17 14:21:31.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3976 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 17 14:21:40.003: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 17 14:21:40.003: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:21:42.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3976" for this suite.
Dec 17 14:21:48.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:21:48.242: INFO: namespace kubectl-3976 deletion completed in 6.22247169s

• [SLOW TEST:16.920 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
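The `--generator=job/v1` flag used above is deprecated as well. The Job object it generated can instead be written out and piped to `kubectl create -f -`; a minimal sketch, reconstructed from the command line in the log:

kubectl create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true   # keep stdin open so an attach can feed "abcd1234" as above
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
EOF

The `--rm` part of the original command (attach, then delete on exit) has no single manifest equivalent; a follow-up `kubectl delete job e2e-test-rm-busybox-job` reproduces the cleanup.
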
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:21:48.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 17 14:21:57.957: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:21:59.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9389" for this suite.
Dec 17 14:22:23.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:22:23.224: INFO: namespace replicaset-9389 deletion completed in 24.145309408s

• [SLOW TEST:34.980 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
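The adoption/release mechanics above hinge entirely on label selectors: a bare pod whose labels match a ReplicaSet's selector is adopted as a dependent, and relabeling it releases it again. A minimal sketch under those assumptions (names and image are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release   # the 'name' label the ReplicaSet selector matches
spec:
  containers:
  - name: pod-adoption-release
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine
EOF
# Changing the matched label takes the pod out of the selector, so the
# ReplicaSet releases it and spins up a replacement:
kubectl label pod pod-adoption-release name=not-adopted --overwrite
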
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:22:23.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 17 14:22:23.350: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix644546087/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:22:23.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1951" for this suite.
Dec 17 14:22:29.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:22:29.553: INFO: namespace kubectl-1951 deletion completed in 6.111707022s

• [SLOW TEST:6.329 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
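A Unix-socket proxy like the one started above can be exercised by hand; curl speaks to Unix sockets directly, and the host part of the URL is then ignored. A short sketch with a hypothetical socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
# expected: the APIVersions object, e.g. {"kind":"APIVersions","versions":["v1"],...}
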
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:22:29.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 17 14:22:29.694: INFO: Waiting up to 5m0s for pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a" in namespace "emptydir-4396" to be "success or failure"
Dec 17 14:22:29.713: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.403392ms
Dec 17 14:22:31.719: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024982065s
Dec 17 14:22:33.730: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035322452s
Dec 17 14:22:35.778: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083409155s
Dec 17 14:22:37.825: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130966351s
Dec 17 14:22:39.837: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142893013s
STEP: Saw pod success
Dec 17 14:22:39.838: INFO: Pod "pod-e48a3d24-74b6-4420-a08f-1ab391dc485a" satisfied condition "success or failure"
Dec 17 14:22:39.870: INFO: Trying to get logs from node iruya-node pod pod-e48a3d24-74b6-4420-a08f-1ab391dc485a container test-container: 
STEP: delete the pod
Dec 17 14:22:39.948: INFO: Waiting for pod pod-e48a3d24-74b6-4420-a08f-1ab391dc485a to disappear
Dec 17 14:22:39.956: INFO: Pod pod-e48a3d24-74b6-4420-a08f-1ab391dc485a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:22:39.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4396" for this suite.
Dec 17 14:22:46.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:22:46.195: INFO: namespace emptydir-4396 deletion completed in 6.231349208s

• [SLOW TEST:16.641 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
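The "(root,0777,tmpfs)" case above corresponds to an emptyDir volume with `medium: Memory`, which backs the volume with tmpfs instead of node disk. A minimal sketch of a pod that inspects the mount (names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a %U' /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # Memory selects a tmpfs backing for the volume
EOF
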
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:22:46.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-76gj
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 14:22:46.329: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-76gj" in namespace "subpath-3578" to be "success or failure"
Dec 17 14:22:46.336: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Pending", Reason="", readiness=false. Elapsed: 7.614642ms
Dec 17 14:22:48.346: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017056995s
Dec 17 14:22:50.354: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025309692s
Dec 17 14:22:52.360: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030927755s
Dec 17 14:22:54.373: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044122791s
Dec 17 14:22:56.384: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 10.055634582s
Dec 17 14:22:58.400: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 12.071286537s
Dec 17 14:23:00.410: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 14.081601144s
Dec 17 14:23:02.420: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 16.091721188s
Dec 17 14:23:04.437: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 18.108361706s
Dec 17 14:23:06.448: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 20.119600806s
Dec 17 14:23:08.462: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 22.132770411s
Dec 17 14:23:10.476: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 24.147089731s
Dec 17 14:23:12.490: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 26.160884675s
Dec 17 14:23:14.506: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Running", Reason="", readiness=true. Elapsed: 28.177105246s
Dec 17 14:23:16.521: INFO: Pod "pod-subpath-test-secret-76gj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.19191695s
STEP: Saw pod success
Dec 17 14:23:16.521: INFO: Pod "pod-subpath-test-secret-76gj" satisfied condition "success or failure"
Dec 17 14:23:16.532: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-76gj container test-container-subpath-secret-76gj: 
STEP: delete the pod
Dec 17 14:23:16.946: INFO: Waiting for pod pod-subpath-test-secret-76gj to disappear
Dec 17 14:23:16.966: INFO: Pod pod-subpath-test-secret-76gj no longer exists
STEP: Deleting pod pod-subpath-test-secret-76gj
Dec 17 14:23:16.967: INFO: Deleting pod "pod-subpath-test-secret-76gj" in namespace "subpath-3578"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:23:16.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3578" for this suite.
Dec 17 14:23:22.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:23:23.146: INFO: namespace subpath-3578 deletion completed in 6.170878298s

• [SLOW TEST:36.951 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
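Atomic-writer volumes (secret, configMap, downwardAPI, projected) publish their contents through a symlink swap; a subPath mount binds to the resolved path instead, which is the corner this test probes. A minimal sketch of a secret consumed through subPath, with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: subpath-secret           # hypothetical name
data:
  data-1: dmFsdWUtMQ==           # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/test/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test/data-1
      subPath: data-1            # mounts only this key, bypassing the symlink swap
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-secret
EOF
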
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:23:23.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 17 14:23:23.203: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:23:42.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4259" for this suite.
Dec 17 14:24:04.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:24:04.332: INFO: namespace init-container-4259 deletion completed in 22.123127683s

• [SLOW TEST:41.186 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
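Init containers run one at a time, each to successful completion, before any app container starts; with restartPolicy Always the app containers then run normally. A minimal sketch of the shape this test exercises (names and images are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["true"]   # each init container must exit 0 before the next starts
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: docker.io/library/nginx:1.14-alpine
EOF
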
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:24:04.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6389.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6389.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6389.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6389.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6389.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 14:24:16.657: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4: the server could not find the requested resource (get pods dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4)
Dec 17 14:24:16.663: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4: the server could not find the requested resource (get pods dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4)
Dec 17 14:24:16.669: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6389.svc.cluster.local from pod dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4: the server could not find the requested resource (get pods dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4)
Dec 17 14:24:16.675: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4: the server could not find the requested resource (get pods dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4)
Dec 17 14:24:16.679: INFO: Unable to read jessie_udp@PodARecord from pod dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4: the server could not find the requested resource (get pods dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4)
Dec 17 14:24:16.686: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4: the server could not find the requested resource (get pods dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4)
Dec 17 14:24:16.686: INFO: Lookups using dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6389.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 17 14:24:21.743: INFO: DNS probes using dns-6389/dns-test-395e5773-6237-4b4f-9afc-71aaf98710a4 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:24:21.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6389" for this suite.
Dec 17 14:24:27.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:24:28.102: INFO: namespace dns-6389 deletion completed in 6.181493862s

• [SLOW TEST:23.770 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
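The awk pipeline in the probe scripts above derives the pod's own A record: the pod IP with dots replaced by dashes, followed by <namespace>.pod.cluster.local. For a hypothetical pod IP of 10.44.0.5 in the dns-6389 namespace, the lookup the script performs is:

dig +notcp +noall +answer +search 10-44-0-5.dns-6389.pod.cluster.local A
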
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:24:28.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:24:28.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da" in namespace "projected-2932" to be "success or failure"
Dec 17 14:24:28.271: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da": Phase="Pending", Reason="", readiness=false. Elapsed: 11.33845ms
Dec 17 14:24:30.282: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022698855s
Dec 17 14:24:32.288: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028044525s
Dec 17 14:24:34.300: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039858185s
Dec 17 14:24:36.313: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053010572s
Dec 17 14:24:38.327: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066780746s
STEP: Saw pod success
Dec 17 14:24:38.327: INFO: Pod "downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da" satisfied condition "success or failure"
Dec 17 14:24:38.334: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da container client-container: 
STEP: delete the pod
Dec 17 14:24:38.405: INFO: Waiting for pod downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da to disappear
Dec 17 14:24:38.421: INFO: Pod downwardapi-volume-0985f6c7-4ec6-4664-ba51-7c32b72bf6da no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:24:38.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2932" for this suite.
Dec 17 14:24:44.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:24:44.675: INFO: namespace projected-2932 deletion completed in 6.173610089s

• [SLOW TEST:16.572 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
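The downward API volume plugin tested above exposes a container's resource requests as files via resourceFieldRef. A minimal sketch using a projected volume, with illustrative names; with the default divisor the file holds the request in bytes (33554432 for 32Mi):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container   # required for volume-based resourceFieldRef
              resource: requests.memory
EOF
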
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:24:44.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:24:52.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-954" for this suite.
Dec 17 14:25:34.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:25:35.036: INFO: namespace kubelet-test-954 deletion completed in 42.135654391s

• [SLOW TEST:50.360 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
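hostAliases, the feature under test above, injects extra entries into the pod's /etc/hosts. A minimal sketch (hostnames and IP are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/hosts"]   # output includes the 127.0.0.1 foo.local bar.local entry
EOF
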
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:25:35.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 17 14:25:35.168: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:25:48.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1186" for this suite.
Dec 17 14:25:54.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:25:54.966: INFO: namespace init-container-1186 deletion completed in 6.143108283s

• [SLOW TEST:19.929 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
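The failure path above is the mirror image of the RestartAlways case: with restartPolicy Never, a non-zero init container is not retried, the pod goes to phase Failed, and the app containers never start. A minimal sketch:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["false"]   # exits non-zero, so the pod fails here
  containers:
  - name: run1
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod pod-init-fail -o jsonpath='{.status.phase}'   # expected: Failed
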
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:25:54.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:25:55.129: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 14:25:55.329: INFO: Number of nodes with available pods: 0
Dec 17 14:25:55.330: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:25:56.477: INFO: Number of nodes with available pods: 0
Dec 17 14:25:56.478: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:25:57.680: INFO: Number of nodes with available pods: 0
Dec 17 14:25:57.680: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:25:58.379: INFO: Number of nodes with available pods: 0
Dec 17 14:25:58.379: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:25:59.354: INFO: Number of nodes with available pods: 0
Dec 17 14:25:59.354: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:01.152: INFO: Number of nodes with available pods: 0
Dec 17 14:26:01.152: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:01.516: INFO: Number of nodes with available pods: 0
Dec 17 14:26:01.516: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:02.513: INFO: Number of nodes with available pods: 0
Dec 17 14:26:02.513: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:03.348: INFO: Number of nodes with available pods: 0
Dec 17 14:26:03.348: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:04.349: INFO: Number of nodes with available pods: 0
Dec 17 14:26:04.349: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:05.348: INFO: Number of nodes with available pods: 1
Dec 17 14:26:05.348: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:06.348: INFO: Number of nodes with available pods: 2
Dec 17 14:26:06.348: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 17 14:26:06.405: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:06.405: INFO: Wrong image for pod: daemon-set-j4h6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:07.948: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:07.948: INFO: Wrong image for pod: daemon-set-j4h6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:08.510: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:08.510: INFO: Wrong image for pod: daemon-set-j4h6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:09.695: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:09.695: INFO: Wrong image for pod: daemon-set-j4h6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:10.502: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:10.502: INFO: Wrong image for pod: daemon-set-j4h6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:11.498: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:11.498: INFO: Wrong image for pod: daemon-set-j4h6q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:11.498: INFO: Pod daemon-set-j4h6q is not available
Dec 17 14:26:12.500: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:12.500: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:13.497: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:13.498: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:14.500: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:14.500: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:15.496: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:15.496: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:16.685: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:16.685: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:17.497: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:17.497: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:18.502: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:18.502: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:19.504: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:19.504: INFO: Pod daemon-set-htxbx is not available
Dec 17 14:26:20.503: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:21.496: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:22.499: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:23.503: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:24.498: INFO: Wrong image for pod: daemon-set-g48hk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 17 14:26:24.498: INFO: Pod daemon-set-g48hk is not available
Dec 17 14:26:25.518: INFO: Pod daemon-set-545f9 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 17 14:26:25.542: INFO: Number of nodes with available pods: 1
Dec 17 14:26:25.542: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:26.592: INFO: Number of nodes with available pods: 1
Dec 17 14:26:26.592: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:27.576: INFO: Number of nodes with available pods: 1
Dec 17 14:26:27.577: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:28.576: INFO: Number of nodes with available pods: 1
Dec 17 14:26:28.576: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:29.563: INFO: Number of nodes with available pods: 1
Dec 17 14:26:29.563: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:30.565: INFO: Number of nodes with available pods: 1
Dec 17 14:26:30.565: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:31.573: INFO: Number of nodes with available pods: 1
Dec 17 14:26:31.573: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:32.574: INFO: Number of nodes with available pods: 1
Dec 17 14:26:32.574: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:26:33.579: INFO: Number of nodes with available pods: 2
Dec 17 14:26:33.579: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8646, will wait for the garbage collector to delete the pods
Dec 17 14:26:33.673: INFO: Deleting DaemonSet.extensions daemon-set took: 19.164407ms
Dec 17 14:26:34.074: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.82063ms
Dec 17 14:26:41.481: INFO: Number of nodes with available pods: 0
Dec 17 14:26:41.481: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 14:26:41.485: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8646/daemonsets","resourceVersion":"17023278"},"items":null}

Dec 17 14:26:41.488: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8646/pods","resourceVersion":"17023278"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:26:41.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8646" for this suite.
Dec 17 14:26:47.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:26:47.714: INFO: namespace daemonsets-8646 deletion completed in 6.203185568s

• [SLOW TEST:52.748 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
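The update phase above flips the DaemonSet image from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0 and watches the controller replace pods node by node, as RollingUpdate (the apps/v1 default) dictates. A sketch of driving the same rollout by hand; the container name "app" is an assumption, since the log does not show the DaemonSet spec:

# Strategy stanza on the DaemonSet, for reference:
#   spec:
#     updateStrategy:
#       type: RollingUpdate
#       rollingUpdate:
#         maxUnavailable: 1
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set
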
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:26:47.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:27:47.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5196" for this suite.
Dec 17 14:28:09.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:28:10.082: INFO: namespace container-probe-5196 deletion completed in 22.236776248s

• [SLOW TEST:82.368 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
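The point of the test above is that readiness failures only gate traffic: the pod keeps running, READY stays 0/1, and RESTARTS stays 0, since only liveness probes trigger restarts. A minimal sketch of a probe that can never succeed (names and image are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container is never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
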
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:28:10.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:28:10.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 17 14:28:10.454: INFO: stderr: ""
Dec 17 14:28:10.455: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:28:10.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-685" for this suite.
Dec 17 14:28:16.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:28:16.856: INFO: namespace kubectl-685 deletion completed in 6.387918675s

• [SLOW TEST:6.773 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:28:16.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1217 14:28:47.534657       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 14:28:47.534: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:28:47.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9959" for this suite.
Dec 17 14:28:55.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:28:55.937: INFO: namespace gc-9959 deletion completed in 8.399118265s

• [SLOW TEST:39.081 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
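
The orphaning behavior verified above can be reproduced with kubectl alone. With the v1.15-era client shown in this run, --cascade=false asks the API server to orphan dependents (newer clients spell it --cascade=orphan); the deployment name is a placeholder.

kubectl create deployment orphan-demo --image=nginx:1.14-alpine
kubectl get rs -l app=orphan-demo        # note the ReplicaSet the Deployment created
kubectl delete deployment orphan-demo --cascade=false
kubectl get rs -l app=orphan-demo        # the ReplicaSet and its pods survive, now orphaned
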
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:28:55.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:28:56.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:29:08.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3842" for this suite.
Dec 17 14:29:50.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:29:50.945: INFO: namespace pods-3842 deletion completed in 42.280025445s

• [SLOW TEST:55.007 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
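
Functionally this spec drives the same pod "exec" subresource that kubectl uses, but upgrades the connection over WebSocket instead of SPDY. A sketch of the equivalent call and the raw endpoint shape, with a placeholder pod name:

# What the test does over a WebSocket upgrade, kubectl does over SPDY:
kubectl exec mypod -- /bin/sh -c 'echo remote execution works'
# The underlying subresource URL looks like this (the "command" query
# parameter repeats once per argv element; the subprotocol is channel.k8s.io):
#   GET /api/v1/namespaces/<ns>/pods/<pod>/exec?command=echo&command=hi&stdout=true
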
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:29:50.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-b8026b93-af54-4129-873e-8db5042e2583
STEP: Creating secret with name s-test-opt-upd-d592615e-8d56-4d79-8faf-befaaed0dfe7
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b8026b93-af54-4129-873e-8db5042e2583
STEP: Updating secret s-test-opt-upd-d592615e-8d56-4d79-8faf-befaaed0dfe7
STEP: Creating secret with name s-test-opt-create-cf7ce469-2bb4-494a-a81b-cbc45c4dad44
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:31:25.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9705" for this suite.
Dec 17 14:31:53.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:31:53.356: INFO: namespace projected-9705 deletion completed in 28.101564249s

• [SLOW TEST:122.410 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
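
The volume layout under test is a projected volume whose secret source is marked optional, so the pod starts even if the secret does not exist yet, and the kubelet reflects later creates, updates, and deletes into the mounted files. A minimal sketch with placeholder names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo       # placeholder
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-optional       # may be created or deleted after the pod starts
          optional: true
EOF
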
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:31:53.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 17 14:31:56.588: INFO: Pod name wrapped-volume-race-afe419d1-2ab0-42f2-b8ea-11de4ee10f47: Found 0 pods out of 5
Dec 17 14:32:01.632: INFO: Pod name wrapped-volume-race-afe419d1-2ab0-42f2-b8ea-11de4ee10f47: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-afe419d1-2ab0-42f2-b8ea-11de4ee10f47 in namespace emptydir-wrapper-3457, will wait for the garbage collector to delete the pods
Dec 17 14:32:48.171: INFO: Deleting ReplicationController wrapped-volume-race-afe419d1-2ab0-42f2-b8ea-11de4ee10f47 took: 2.857562119s
Dec 17 14:32:48.677: INFO: Terminating ReplicationController wrapped-volume-race-afe419d1-2ab0-42f2-b8ea-11de4ee10f47 pods took: 506.093482ms
STEP: Creating RC which spawns configmap-volume pods
Dec 17 14:33:36.955: INFO: Pod name wrapped-volume-race-377c28f3-f5a1-47a6-a4cc-2e0dc47ea1e2: Found 0 pods out of 5
Dec 17 14:33:43.758: INFO: Pod name wrapped-volume-race-377c28f3-f5a1-47a6-a4cc-2e0dc47ea1e2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-377c28f3-f5a1-47a6-a4cc-2e0dc47ea1e2 in namespace emptydir-wrapper-3457, will wait for the garbage collector to delete the pods
Dec 17 14:34:22.480: INFO: Deleting ReplicationController wrapped-volume-race-377c28f3-f5a1-47a6-a4cc-2e0dc47ea1e2 took: 27.562099ms
Dec 17 14:34:22.881: INFO: Terminating ReplicationController wrapped-volume-race-377c28f3-f5a1-47a6-a4cc-2e0dc47ea1e2 pods took: 400.932482ms
STEP: Creating RC which spawns configmap-volume pods
Dec 17 14:35:08.684: INFO: Pod name wrapped-volume-race-4b662c59-2370-4b70-8ec3-a7617fb2040b: Found 0 pods out of 5
Dec 17 14:35:13.711: INFO: Pod name wrapped-volume-race-4b662c59-2370-4b70-8ec3-a7617fb2040b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4b662c59-2370-4b70-8ec3-a7617fb2040b in namespace emptydir-wrapper-3457, will wait for the garbage collector to delete the pods
Dec 17 14:35:57.935: INFO: Deleting ReplicationController wrapped-volume-race-4b662c59-2370-4b70-8ec3-a7617fb2040b took: 19.77199ms
Dec 17 14:35:58.236: INFO: Terminating ReplicationController wrapped-volume-race-4b662c59-2370-4b70-8ec3-a7617fb2040b pods took: 300.898665ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:36:53.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3457" for this suite.
Dec 17 14:37:04.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:37:04.147: INFO: namespace emptydir-wrapper-3457 deletion completed in 10.215540819s

• [SLOW TEST:310.791 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
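
The setup above is heavier than it looks: 50 ConfigMaps, each mounted as its own volume in every pod of a 5-replica controller, created and torn down three times. A scaled-down sketch of the ConfigMap half, with placeholder names and a smaller count:

for i in $(seq 1 5); do
  kubectl create configmap "race-cm-$i" --from-literal=data="value-$i"
done
# A controller whose pod template lists each ConfigMap as a separate volume
# then exercises the kubelet's per-volume wrapped emptyDir mounts; the spec
# asserts that repeated churn of such pods causes no mount races.
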
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:37:04.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 17 14:37:04.217: INFO: Waiting up to 5m0s for pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63" in namespace "downward-api-6983" to be "success or failure"
Dec 17 14:37:04.285: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 67.855667ms
Dec 17 14:37:06.296: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07868136s
Dec 17 14:37:08.310: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093047941s
Dec 17 14:37:10.322: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104953726s
Dec 17 14:37:12.384: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166812714s
Dec 17 14:37:14.397: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179422934s
Dec 17 14:37:16.402: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Pending", Reason="", readiness=false. Elapsed: 12.185080684s
Dec 17 14:37:18.408: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.190884586s
STEP: Saw pod success
Dec 17 14:37:18.408: INFO: Pod "downward-api-04c4ee25-c261-4738-acc3-829273f57b63" satisfied condition "success or failure"
Dec 17 14:37:18.411: INFO: Trying to get logs from node iruya-node pod downward-api-04c4ee25-c261-4738-acc3-829273f57b63 container dapi-container: 
STEP: delete the pod
Dec 17 14:37:18.562: INFO: Waiting for pod downward-api-04c4ee25-c261-4738-acc3-829273f57b63 to disappear
Dec 17 14:37:18.589: INFO: Pod downward-api-04c4ee25-c261-4738-acc3-829273f57b63 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:37:18.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6983" for this suite.
Dec 17 14:37:24.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:37:24.757: INFO: namespace downward-api-6983 deletion completed in 6.14738392s

• [SLOW TEST:20.610 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
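
The env wiring verified above uses resourceFieldRef, which resolves a container's own requests and limits into environment variables at start time. A minimal sketch with placeholder names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo             # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_REQUEST'"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits:   { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
kubectl logs downward-env-demo   # expect CPU_LIMIT=1 (rounded up to whole cores)
                                 # and MEMORY_REQUEST=33554432 (32Mi in bytes)
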
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:37:24.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 17 14:37:37.514: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9291 pod-service-account-ca4dc1be-33aa-4d04-bb67-df91280ee26f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 17 14:37:40.870: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9291 pod-service-account-ca4dc1be-33aa-4d04-bb67-df91280ee26f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 17 14:37:41.398: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9291 pod-service-account-ca4dc1be-33aa-4d04-bb67-df91280ee26f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:37:41.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9291" for this suite.
Dec 17 14:37:47.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:37:48.075: INFO: namespace svcaccounts-9291 deletion completed in 6.174174318s

• [SLOW TEST:23.317 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
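
The three files read above come from the token volume that the service account machinery mounts into every pod unless automountServiceAccountToken is disabled. Given any running pod, the same files can be listed by hand; the pod name is a placeholder:

kubectl exec mypod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expected entries: ca.crt  namespace  token
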
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:37:48.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c802a886-5fb3-4157-8a71-6e2b6e176041
STEP: Creating a pod to test consume configMaps
Dec 17 14:37:48.394: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26" in namespace "projected-6386" to be "success or failure"
Dec 17 14:37:48.400: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 5.864392ms
Dec 17 14:37:50.409: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014085338s
Dec 17 14:37:52.418: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023934049s
Dec 17 14:37:58.757: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 10.362452844s
Dec 17 14:38:00.773: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 12.37816652s
Dec 17 14:38:02.781: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 14.38635773s
Dec 17 14:38:04.789: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Pending", Reason="", readiness=false. Elapsed: 16.394907474s
Dec 17 14:38:06.795: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.400043081s
STEP: Saw pod success
Dec 17 14:38:06.795: INFO: Pod "pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26" satisfied condition "success or failure"
Dec 17 14:38:06.797: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 14:38:06.843: INFO: Waiting for pod pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26 to disappear
Dec 17 14:38:06.858: INFO: Pod pod-projected-configmaps-e8ed6a94-dff8-44f6-b486-a20bbd457c26 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:38:06.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6386" for this suite.
Dec 17 14:38:12.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:38:13.079: INFO: namespace projected-6386 deletion completed in 6.158335272s

• [SLOW TEST:25.004 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:38:13.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 17 14:38:27.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-80dde023-ca30-4611-b0e8-4d04d2aa78c7 -c busybox-main-container --namespace=emptydir-3491 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 17 14:38:27.843: INFO: stderr: ""
Dec 17 14:38:27.844: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:38:27.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3491" for this suite.
Dec 17 14:38:35.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:38:37.124: INFO: namespace emptydir-3491 deletion completed in 9.272587089s

• [SLOW TEST:24.045 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
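
The shared mount verified above is a single emptyDir volume mounted into two containers of one pod, so a write from one side is immediately visible to the other. A minimal sketch, with placeholder names but the same paths the log shows:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo            # placeholder
spec:
  containers:
  - name: writer
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: reader
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec shared-volume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt
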
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:38:37.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:39:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6258" for this suite.
Dec 17 14:39:08.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:39:09.191: INFO: namespace kubelet-test-6258 deletion completed in 8.59728506s

• [SLOW TEST:32.066 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
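
The assertion behind this spec is that a container whose command always fails ends up with a populated terminated state. Reproducible by hand with placeholder names:

kubectl run always-fails --restart=Never --image=busybox:1.29 -- /bin/false
# Once the container exits, read the terminated state directly:
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# expect: Error
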
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:39:09.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-nqhk
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 14:39:09.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nqhk" in namespace "subpath-8344" to be "success or failure"
Dec 17 14:39:09.436: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 80.000789ms
Dec 17 14:39:11.444: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087418025s
Dec 17 14:39:13.456: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099274162s
Dec 17 14:39:15.464: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107212719s
Dec 17 14:39:17.480: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123896342s
Dec 17 14:39:19.489: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132365848s
Dec 17 14:39:21.500: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 12.143568767s
Dec 17 14:39:23.944: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 14.587653819s
Dec 17 14:39:25.953: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 16.596533093s
Dec 17 14:39:27.958: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 18.602040536s
Dec 17 14:39:29.965: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 20.608477396s
Dec 17 14:39:31.976: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 22.620060024s
Dec 17 14:39:34.014: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 24.657242762s
Dec 17 14:39:36.112: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 26.755866221s
Dec 17 14:39:38.204: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 28.847506422s
Dec 17 14:39:40.261: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 30.904626284s
Dec 17 14:39:42.311: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Running", Reason="", readiness=true. Elapsed: 32.95481709s
Dec 17 14:39:44.340: INFO: Pod "pod-subpath-test-configmap-nqhk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.984066703s
STEP: Saw pod success
Dec 17 14:39:44.341: INFO: Pod "pod-subpath-test-configmap-nqhk" satisfied condition "success or failure"
Dec 17 14:39:44.374: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-nqhk container test-container-subpath-configmap-nqhk: 
STEP: delete the pod
Dec 17 14:39:44.943: INFO: Waiting for pod pod-subpath-test-configmap-nqhk to disappear
Dec 17 14:39:44.970: INFO: Pod pod-subpath-test-configmap-nqhk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nqhk
Dec 17 14:39:44.971: INFO: Deleting pod "pod-subpath-test-configmap-nqhk" in namespace "subpath-8344"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:39:44.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8344" for this suite.
Dec 17 14:39:55.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:39:57.057: INFO: namespace subpath-8344 deletion completed in 11.936684922s

• [SLOW TEST:47.865 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
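
The subPath mechanics verified above mount a single key of a ConfigMap at a file path instead of mounting the whole volume directory, with the atomic writer handling updates underneath. A minimal sketch with placeholder names:

kubectl create configmap subpath-demo --from-literal=config.txt='hello from subPath'
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo                  # placeholder
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ["cat", "/etc/app/config.txt"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/app/config.txt
      subPath: config.txt             # mounts one key, not the whole volume
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo
EOF
kubectl logs subpath-demo             # expect: hello from subPath
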
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:39:57.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 17 14:39:57.726: INFO: Waiting up to 5m0s for pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5" in namespace "emptydir-94" to be "success or failure"
Dec 17 14:39:57.902: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 175.374358ms
Dec 17 14:39:59.981: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25470761s
Dec 17 14:40:02.105: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378924559s
Dec 17 14:40:04.160: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433217212s
Dec 17 14:40:06.199: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472105489s
Dec 17 14:40:08.365: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638969742s
Dec 17 14:40:10.415: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.688474389s
Dec 17 14:40:12.457: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.730560175s
Dec 17 14:40:14.546: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.819516129s
Dec 17 14:40:16.573: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.846499327s
Dec 17 14:40:18.685: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.958600527s
Dec 17 14:40:20.721: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.994026708s
STEP: Saw pod success
Dec 17 14:40:20.721: INFO: Pod "pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5" satisfied condition "success or failure"
Dec 17 14:40:20.796: INFO: Trying to get logs from node iruya-node pod pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5 container test-container: 
STEP: delete the pod
Dec 17 14:40:21.253: INFO: Waiting for pod pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5 to disappear
Dec 17 14:40:21.279: INFO: Pod pod-71f4dcea-7049-4171-9ba3-b97cc52c5cc5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:40:21.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-94" for this suite.
Dec 17 14:40:31.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:40:32.609: INFO: namespace emptydir-94 deletion completed in 11.284779807s

• [SLOW TEST:35.552 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:40:32.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1090.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1090.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 14:41:12.647: INFO: File wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-57e6af4b-ce60-460c-96a5-07d87bc2518b contains '' instead of 'foo.example.com.'
Dec 17 14:41:12.699: INFO: File jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-57e6af4b-ce60-460c-96a5-07d87bc2518b contains '' instead of 'foo.example.com.'
Dec 17 14:41:12.699: INFO: Lookups using dns-1090/dns-test-57e6af4b-ce60-460c-96a5-07d87bc2518b failed for: [wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local]

Dec 17 14:41:17.957: INFO: DNS probes using dns-test-57e6af4b-ce60-460c-96a5-07d87bc2518b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1090.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1090.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 14:42:07.313: INFO: File wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b contains '' instead of 'bar.example.com.'
Dec 17 14:42:07.484: INFO: File jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b contains '' instead of 'bar.example.com.'
Dec 17 14:42:07.484: INFO: Lookups using dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b failed for: [wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local]

Dec 17 14:42:12.702: INFO: File wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b contains '' instead of 'bar.example.com.'
Dec 17 14:42:12.831: INFO: File jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b contains '' instead of 'bar.example.com.'
Dec 17 14:42:12.831: INFO: Lookups using dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b failed for: [wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local]

Dec 17 14:42:17.710: INFO: File wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b contains '' instead of 'bar.example.com.'
Dec 17 14:42:17.895: INFO: Lookups using dns-1090/dns-test-9b736202-a620-4cb9-a889-1ed92f02755b failed for: [wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local]

Dec 17 14:42:22.632: INFO: DNS probes using dns-test-9b736202-a620-4cb9-a889-1ed92f02755b succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1090.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1090.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 14:42:53.423: INFO: File wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-6452e0a7-ff32-4da1-81ca-ea7d430cc8cb contains '' instead of '10.97.194.95'
Dec 17 14:42:53.439: INFO: File jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local from pod  dns-1090/dns-test-6452e0a7-ff32-4da1-81ca-ea7d430cc8cb contains '' instead of '10.97.194.95'
Dec 17 14:42:53.439: INFO: Lookups using dns-1090/dns-test-6452e0a7-ff32-4da1-81ca-ea7d430cc8cb failed for: [wheezy_udp@dns-test-service-3.dns-1090.svc.cluster.local jessie_udp@dns-test-service-3.dns-1090.svc.cluster.local]

Dec 17 14:42:58.457: INFO: DNS probes using dns-test-6452e0a7-ff32-4da1-81ca-ea7d430cc8cb succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:42:58.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1090" for this suite.
Dec 17 14:43:06.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:43:06.722: INFO: namespace dns-1090 deletion completed in 8.134644147s

• [SLOW TEST:154.111 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
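
An ExternalName service is served by the cluster DNS as a plain CNAME record, which is why the probes above expect foo.example.com., then bar.example.com., then an A record once the service flips to type=ClusterIP. The service itself can be created directly; names are placeholders:

kubectl create service externalname dns-demo --external-name foo.example.com
# From inside the cluster, the service name resolves as a CNAME:
kubectl run -it --rm dns-client --restart=Never --image=busybox:1.29 -- \
  nslookup dns-demo.default.svc.cluster.local
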
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:43:06.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 17 14:43:06.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9743'
Dec 17 14:43:06.906: INFO: stderr: ""
Dec 17 14:43:06.906: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 17 14:43:06.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9743'
Dec 17 14:43:16.571: INFO: stderr: ""
Dec 17 14:43:16.571: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:43:16.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9743" for this suite.
Dec 17 14:43:26.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:43:26.851: INFO: namespace kubectl-9743 deletion completed in 10.094961374s

• [SLOW TEST:20.130 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:43:26.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:43:27.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576" in namespace "projected-5757" to be "success or failure"
Dec 17 14:43:27.067: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 10.501198ms
Dec 17 14:43:29.076: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019867951s
Dec 17 14:43:31.398: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341624642s
Dec 17 14:43:33.416: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359810686s
Dec 17 14:43:35.423: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 8.366798805s
Dec 17 14:43:37.432: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 10.375677771s
Dec 17 14:43:39.438: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 12.381754738s
Dec 17 14:43:41.444: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 14.387660942s
Dec 17 14:43:43.459: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Pending", Reason="", readiness=false. Elapsed: 16.402971835s
Dec 17 14:43:45.466: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.409261878s
STEP: Saw pod success
Dec 17 14:43:45.466: INFO: Pod "downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576" satisfied condition "success or failure"
Dec 17 14:43:45.469: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576 container client-container: 
STEP: delete the pod
Dec 17 14:43:45.556: INFO: Waiting for pod downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576 to disappear
Dec 17 14:43:45.676: INFO: Pod downwardapi-volume-9a452d35-a5ec-4090-b93a-2c6614216576 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:43:45.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5757" for this suite.
Dec 17 14:43:55.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:43:55.538: INFO: namespace projected-5757 deletion completed in 9.852599511s

• [SLOW TEST:28.686 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:43:55.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e01c3fb4-01ba-4403-b834-e434a78d8f7a
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:44:07.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3724" for this suite.
Dec 17 14:44:29.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:44:29.933: INFO: namespace configmap-3724 deletion completed in 22.184073901s

• [SLOW TEST:34.395 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
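
ConfigMaps carry non-UTF-8 payloads in a dedicated binaryData field, base64-encoded in the manifest and decoded back to raw bytes when mounted, which is what this spec checks alongside plain data keys. A minimal sketch with a placeholder name and payload:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo                   # placeholder
data:
  text: "plain text value"
binaryData:
  blob: 3q2+7w==                      # base64 of the bytes 0xDE 0xAD 0xBE 0xEF
EOF
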
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:44:29.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-14ede54f-d075-4984-8a38-0877e9891928
STEP: Creating a pod to test consume secrets
Dec 17 14:44:30.043: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0" in namespace "projected-4772" to be "success or failure"
Dec 17 14:44:30.056: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.352755ms
Dec 17 14:44:32.063: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019947207s
Dec 17 14:44:34.077: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033869879s
Dec 17 14:44:36.085: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041387781s
Dec 17 14:44:38.092: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048975995s
Dec 17 14:44:40.103: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05938302s
STEP: Saw pod success
Dec 17 14:44:40.103: INFO: Pod "pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0" satisfied condition "success or failure"
Dec 17 14:44:40.108: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0 container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 14:44:40.247: INFO: Waiting for pod pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0 to disappear
Dec 17 14:44:40.254: INFO: Pod pod-projected-secrets-f06b9c4b-6bcf-4959-a3f1-d9e9cc3d6ff0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:44:40.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4772" for this suite.
Dec 17 14:44:46.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:44:46.533: INFO: namespace projected-4772 deletion completed in 6.272639317s

• [SLOW TEST:16.600 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:44:46.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:44:46.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d" in namespace "downward-api-3197" to be "success or failure"
Dec 17 14:44:46.714: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434187ms
Dec 17 14:44:48.720: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010635704s
Dec 17 14:44:50.741: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03114156s
Dec 17 14:44:52.796: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086565267s
Dec 17 14:44:54.815: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104942128s
Dec 17 14:44:56.824: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114506548s
STEP: Saw pod success
Dec 17 14:44:56.824: INFO: Pod "downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d" satisfied condition "success or failure"
Dec 17 14:44:56.834: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d container client-container: 
STEP: delete the pod
Dec 17 14:44:56.890: INFO: Waiting for pod downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d to disappear
Dec 17 14:44:56.899: INFO: Pod downwardapi-volume-e16d5a7d-3392-486f-8b73-eb22355e1b8d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:44:56.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3197" for this suite.
Dec 17 14:45:02.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:45:03.038: INFO: namespace downward-api-3197 deletion completed in 6.135556831s

• [SLOW TEST:16.503 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
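
For reference, the defaultMode behavior exercised above can be reproduced with a pod along these lines. This is a sketch, not the manifest the framework generated: the image, paths, and the 0400 mode are illustrative; only the container name client-container is taken from the log.

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-defaultmode-demo
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400     # applied to every projected file unless an item overrides it
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
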
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:45:03.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 17 14:45:03.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7296'
Dec 17 14:45:03.703: INFO: stderr: ""
Dec 17 14:45:03.703: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 17 14:45:04.710: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:04.710: INFO: Found 0 / 1
Dec 17 14:45:05.712: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:05.712: INFO: Found 0 / 1
Dec 17 14:45:06.721: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:06.722: INFO: Found 0 / 1
Dec 17 14:45:07.714: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:07.714: INFO: Found 0 / 1
Dec 17 14:45:08.711: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:08.711: INFO: Found 0 / 1
Dec 17 14:45:09.715: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:09.716: INFO: Found 0 / 1
Dec 17 14:45:10.713: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:10.713: INFO: Found 0 / 1
Dec 17 14:45:11.717: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:11.718: INFO: Found 0 / 1
Dec 17 14:45:12.737: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:12.737: INFO: Found 1 / 1
Dec 17 14:45:12.738: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 17 14:45:12.748: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:12.748: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 17 14:45:12.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8c2qd --namespace=kubectl-7296 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 17 14:45:13.125: INFO: stderr: ""
Dec 17 14:45:13.125: INFO: stdout: "pod/redis-master-8c2qd patched\n"
STEP: checking annotations
Dec 17 14:45:13.471: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 14:45:13.471: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:45:13.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7296" for this suite.
Dec 17 14:45:35.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:45:35.641: INFO: namespace kubectl-7296 deletion completed in 22.158423643s

• [SLOW TEST:32.603 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
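
The patch command logged above, written out with the shell quoting it needs when run by hand (the framework passes the JSON document as a single argument), plus a hypothetical jsonpath check of the result:

  kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8c2qd \
    --namespace=kubectl-7296 -p '{"metadata":{"annotations":{"x":"y"}}}'
  kubectl --kubeconfig=/root/.kube/config get pod redis-master-8c2qd \
    --namespace=kubectl-7296 -o jsonpath='{.metadata.annotations.x}'   # expected output: y
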
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:45:35.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:45:35.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7427" for this suite.
Dec 17 14:45:58.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:45:58.385: INFO: namespace pods-7427 deletion completed in 22.505260732s

• [SLOW TEST:22.744 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
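
The QOS class test only checks that status.qosClass is populated on the submitted pod. A sketch of a pod the API reports as Guaranteed (requests equal to limits for every container; all names and quantities here are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo
  spec:
    containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 64Mi
        limits:              # equal to requests, so qosClass becomes Guaranteed
          cpu: 100m
          memory: 64Mi

  kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
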
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:45:58.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 17 14:46:16.656: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 14:46:16.669: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 14:46:18.670: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 14:46:18.688: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 14:46:20.669: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 14:46:20.681: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 14:46:22.669: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 14:46:22.682: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 14:46:24.669: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 14:46:24.678: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:46:24.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9623" for this suite.
Dec 17 14:46:52.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:46:52.891: INFO: namespace container-lifecycle-hook-9623 deletion completed in 28.179298946s

• [SLOW TEST:54.505 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
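
The pod name pod-with-prestop-http-hook comes from the log; in the real test the hook targets the handler pod created in the BeforeEach step above. A hedged sketch of the wiring, with a placeholder path and port:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: main                  # hypothetical container name
      image: nginx
      lifecycle:
        preStop:
          httpGet:                # fired by the kubelet before the container is stopped
            path: /echo
            port: 8080
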
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:46:52.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:46:52.985: INFO: Creating ReplicaSet my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7
Dec 17 14:46:53.043: INFO: Pod name my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7: Found 0 pods out of 1
Dec 17 14:46:58.094: INFO: Pod name my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7: Found 1 pods out of 1
Dec 17 14:46:58.094: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7" is running
Dec 17 14:47:08.113: INFO: Pod "my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7-gkd5p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 14:46:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 14:46:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 14:46:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 14:46:53 +0000 UTC Reason: Message:}])
Dec 17 14:47:08.114: INFO: Trying to dial the pod
Dec 17 14:47:13.177: INFO: Controller my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7: Got expected result from replica 1 [my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7-gkd5p]: "my-hostname-basic-56fb1f72-fc5a-4b1d-9e92-f6dde322d4a7-gkd5p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:47:13.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7529" for this suite.
Dec 17 14:47:19.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:47:19.388: INFO: namespace replicaset-7529 deletion completed in 6.203575457s

• [SLOW TEST:26.497 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
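
A minimal apps/v1 ReplicaSet of the shape this test creates: one replica of an image that serves its own hostname, which the test then dials and compares against the pod name. The image reference is an assumption; the suite uses its own serve-hostname test image.

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # illustrative image reference
          ports:
          - containerPort: 9376
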
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:47:19.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-34460d5b-1802-4d72-8075-5e8af714da1c
STEP: Creating a pod to test consume secrets
Dec 17 14:47:19.563: INFO: Waiting up to 5m0s for pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc" in namespace "secrets-2117" to be "success or failure"
Dec 17 14:47:19.572: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.555651ms
Dec 17 14:47:21.583: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020430186s
Dec 17 14:47:23.594: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030696376s
Dec 17 14:47:25.601: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038540517s
Dec 17 14:47:27.611: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048182694s
Dec 17 14:47:29.814: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.251426439s
Dec 17 14:47:31.825: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.262390206s
STEP: Saw pod success
Dec 17 14:47:31.825: INFO: Pod "pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc" satisfied condition "success or failure"
Dec 17 14:47:31.836: INFO: Trying to get logs from node iruya-node pod pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc container secret-volume-test: 
STEP: delete the pod
Dec 17 14:47:31.992: INFO: Waiting for pod pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc to disappear
Dec 17 14:47:31.999: INFO: Pod pod-secrets-65c105d0-e380-48ec-ab34-3bccb52364dc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:47:31.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2117" for this suite.
Dec 17 14:47:38.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:47:38.208: INFO: namespace secrets-2117 deletion completed in 6.203896187s

• [SLOW TEST:18.819 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
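
"Consumable in multiple volumes" means the same secret mounted at two paths in one pod. A sketch, with the container name secret-volume-test taken from the log and the secret name and key invented for illustration:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume-1/data /etc/secret-volume-2/data"]   # the "data" key is illustrative
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/secret-volume-1
      - name: secret-volume-2
        mountPath: /etc/secret-volume-2
    volumes:
    - name: secret-volume-1
      secret:
        secretName: secret-test-demo
    - name: secret-volume-2
      secret:
        secretName: secret-test-demo   # same secret, second mount
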
SS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:47:38.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-f2a5845f-905f-444c-a2ee-fba52e2c8d2c
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:47:38.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6702" for this suite.
Dec 17 14:47:44.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:47:44.539: INFO: namespace configmap-6702 deletion completed in 6.239271371s

• [SLOW TEST:6.331 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
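
This is a negative test: ConfigMap keys must be non-empty and match the allowed key character set, so the create call is expected to fail validation. A by-hand equivalent, assuming an arbitrary ConfigMap name:

  cat <<EOF | kubectl create -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-emptykey
  data:
    "": "value"
  EOF
  # expected: the API server rejects the object with a validation error on the empty key
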
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:47:44.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:47:44.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3" in namespace "downward-api-7598" to be "success or failure"
Dec 17 14:47:44.688: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53473ms
Dec 17 14:47:46.710: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030703019s
Dec 17 14:47:48.719: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039628809s
Dec 17 14:47:50.729: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050114763s
Dec 17 14:47:52.742: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063180642s
Dec 17 14:47:54.766: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086509773s
STEP: Saw pod success
Dec 17 14:47:54.766: INFO: Pod "downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3" satisfied condition "success or failure"
Dec 17 14:47:54.776: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3 container client-container: 
STEP: delete the pod
Dec 17 14:47:55.164: INFO: Waiting for pod downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3 to disappear
Dec 17 14:47:55.171: INFO: Pod downwardapi-volume-813efb71-40bc-49d3-ac70-aec8f1c9e2b3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:47:55.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7598" for this suite.
Dec 17 14:48:01.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:48:01.349: INFO: namespace downward-api-7598 deletion completed in 6.170325967s

• [SLOW TEST:16.809 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
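
This variant differs from the defaultMode sketch earlier only in setting a per-item mode, which overrides any volume-level defaultMode for that one file:

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400              # per-item mode wins over defaultMode
        fieldRef:
          fieldPath: metadata.name
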
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:48:01.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 17 14:48:01.490: INFO: Waiting up to 5m0s for pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb" in namespace "emptydir-295" to be "success or failure"
Dec 17 14:48:01.511: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.422631ms
Dec 17 14:48:03.518: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027066537s
Dec 17 14:48:05.530: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039461669s
Dec 17 14:48:07.551: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059803712s
Dec 17 14:48:09.582: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091216915s
Dec 17 14:48:11.595: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103993551s
STEP: Saw pod success
Dec 17 14:48:11.595: INFO: Pod "pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb" satisfied condition "success or failure"
Dec 17 14:48:11.600: INFO: Trying to get logs from node iruya-node pod pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb container test-container: 
STEP: delete the pod
Dec 17 14:48:11.734: INFO: Waiting for pod pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb to disappear
Dec 17 14:48:11.747: INFO: Pod pod-26344e72-e39a-44c5-bf67-0ee3e0f353bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:48:11.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-295" for this suite.
Dec 17 14:48:17.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:48:17.981: INFO: namespace emptydir-295 deletion completed in 6.223640228s

• [SLOW TEST:16.632 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
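
"(non-root,0777,tmpfs)" decodes as: run as a non-root uid, expect 0777 permissions on the mount, back the emptyDir with memory. The real test uses a mount-test image that writes and stats a file; this sketch shows only the volume and security wiring, with an arbitrary uid:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    securityContext:
      runAsUser: 1001            # arbitrary non-root uid for illustration
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory           # tmpfs; omit medium for the node-default backing (the 0666,default variant below)
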
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:48:17.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 14:48:18.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f" in namespace "downward-api-4244" to be "success or failure"
Dec 17 14:48:18.129: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028364ms
Dec 17 14:48:20.142: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023288686s
Dec 17 14:48:22.161: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041983505s
Dec 17 14:48:24.167: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048272663s
Dec 17 14:48:26.175: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056012804s
Dec 17 14:48:28.182: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063796835s
STEP: Saw pod success
Dec 17 14:48:28.183: INFO: Pod "downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f" satisfied condition "success or failure"
Dec 17 14:48:28.185: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f container client-container: 
STEP: delete the pod
Dec 17 14:48:28.228: INFO: Waiting for pod downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f to disappear
Dec 17 14:48:28.233: INFO: Pod downwardapi-volume-51ff3f29-beb0-45c4-90b6-f348447e279f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:48:28.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4244" for this suite.
Dec 17 14:48:34.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:48:34.399: INFO: namespace downward-api-4244 deletion completed in 6.161209582s

• [SLOW TEST:16.417 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
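
Exposing a container's cpu request through a downwardAPI volume uses resourceFieldRef instead of fieldRef; at the volume level containerName is required, and the divisor controls the units written into the file:

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container   # must name a container that declares requests.cpu
          resource: requests.cpu
          divisor: 1m                        # write the value in millicores
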
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:48:34.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 17 14:51:36.806: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:36.895: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:38.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:38.934: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:40.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:40.905: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:42.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:42.904: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:44.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:44.905: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:46.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:46.905: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:48.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:48.904: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:50.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:50.904: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:52.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:52.904: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:54.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:54.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:56.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:56.910: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:51:58.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:51:58.906: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:00.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:00.905: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:02.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:02.905: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:04.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:04.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:06.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:06.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:08.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:08.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:10.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:11.040: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:12.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:12.905: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:14.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:14.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:16.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:16.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:18.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:18.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:20.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:20.902: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:22.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:22.910: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:24.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:24.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:26.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:26.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:28.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:28.912: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:30.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:30.908: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:32.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:32.912: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:34.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:34.902: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:36.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:36.904: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:38.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:38.908: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:40.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:40.904: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:42.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:42.909: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:44.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:44.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:46.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:46.908: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:48.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:48.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:50.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:50.907: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:52.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:52.903: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:54.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:54.901: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:56.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:56.908: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:52:58.896: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:52:58.901: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 14:53:00.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 14:53:00.914: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:53:00.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2215" for this suite.
Dec 17 14:53:22.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:53:23.088: INFO: namespace container-lifecycle-hook-2215 deletion completed in 22.168935789s

• [SLOW TEST:288.689 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
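
The postStart counterpart to the preStop test above; here the hook runs a command inside the container right after it starts, and the two-second cadence of the "still exists" lines is the framework polling the API until the pod object is gone. A hedged sketch of the hook wiring, with a placeholder command:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-exec-hook
  spec:
    containers:
    - name: main                 # hypothetical container name
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo started > /tmp/poststart"]   # the real hook reports to the handler pod
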
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:53:23.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 14:53:23.206: INFO: Create a RollingUpdate DaemonSet
Dec 17 14:53:23.260: INFO: Check that daemon pods launch on every node of the cluster
Dec 17 14:53:23.276: INFO: Number of nodes with available pods: 0
Dec 17 14:53:23.276: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:24.510: INFO: Number of nodes with available pods: 0
Dec 17 14:53:24.511: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:25.296: INFO: Number of nodes with available pods: 0
Dec 17 14:53:25.297: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:26.606: INFO: Number of nodes with available pods: 0
Dec 17 14:53:26.606: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:27.295: INFO: Number of nodes with available pods: 0
Dec 17 14:53:27.295: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:28.316: INFO: Number of nodes with available pods: 0
Dec 17 14:53:28.316: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:31.073: INFO: Number of nodes with available pods: 0
Dec 17 14:53:31.073: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:31.389: INFO: Number of nodes with available pods: 0
Dec 17 14:53:31.389: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:32.292: INFO: Number of nodes with available pods: 0
Dec 17 14:53:32.292: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:33.286: INFO: Number of nodes with available pods: 0
Dec 17 14:53:33.286: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:34.287: INFO: Number of nodes with available pods: 1
Dec 17 14:53:34.287: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:35.310: INFO: Number of nodes with available pods: 1
Dec 17 14:53:35.310: INFO: Node iruya-node is running more than one daemon pod
Dec 17 14:53:36.290: INFO: Number of nodes with available pods: 2
Dec 17 14:53:36.290: INFO: Number of running nodes: 2, number of available pods: 2
Dec 17 14:53:36.290: INFO: Update the DaemonSet to trigger a rollout
Dec 17 14:53:36.302: INFO: Updating DaemonSet daemon-set
Dec 17 14:53:48.341: INFO: Roll back the DaemonSet before rollout is complete
Dec 17 14:53:48.358: INFO: Updating DaemonSet daemon-set
Dec 17 14:53:48.358: INFO: Make sure DaemonSet rollback is complete
Dec 17 14:53:48.375: INFO: Wrong image for pod: daemon-set-r7mp9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 17 14:53:48.375: INFO: Pod daemon-set-r7mp9 is not available
Dec 17 14:53:49.718: INFO: Wrong image for pod: daemon-set-r7mp9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 17 14:53:49.719: INFO: Pod daemon-set-r7mp9 is not available
Dec 17 14:53:50.410: INFO: Wrong image for pod: daemon-set-r7mp9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 17 14:53:50.410: INFO: Pod daemon-set-r7mp9 is not available
Dec 17 14:53:51.419: INFO: Wrong image for pod: daemon-set-r7mp9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 17 14:53:51.419: INFO: Pod daemon-set-r7mp9 is not available
Dec 17 14:53:52.409: INFO: Wrong image for pod: daemon-set-r7mp9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 17 14:53:52.409: INFO: Pod daemon-set-r7mp9 is not available
Dec 17 14:53:53.417: INFO: Pod daemon-set-k6pvj is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-842, will wait for the garbage collector to delete the pods
Dec 17 14:53:53.569: INFO: Deleting DaemonSet.extensions daemon-set took: 77.944157ms
Dec 17 14:53:54.870: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300774763s
Dec 17 14:54:01.374: INFO: Number of nodes with available pods: 0
Dec 17 14:54:01.374: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 14:54:01.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-842/daemonsets","resourceVersion":"17027187"},"items":null}

Dec 17 14:54:01.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-842/pods","resourceVersion":"17027187"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:54:01.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-842" for this suite.
Dec 17 14:54:07.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:54:07.593: INFO: namespace daemonsets-842 deletion completed in 6.174648303s

• [SLOW TEST:44.503 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
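
The rollout in this test is driven through the API, but the same sequence can be expressed with kubectl: push a bad image (foo:non-existent, per the log), then undo before the rollout completes. The container name app is an assumption:

  kubectl -n daemonsets-842 set image daemonset/daemon-set app=foo:non-existent
  kubectl -n daemonsets-842 rollout undo daemonset/daemon-set
  kubectl -n daemonsets-842 rollout status daemonset/daemon-set
  # expected end state: pods back on docker.io/library/nginx:1.14-alpine, and pods that
  # never ran the bad image are not restarted
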
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:54:07.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 17 14:54:07.669: INFO: Waiting up to 5m0s for pod "pod-118184cb-aa06-48b4-bc40-823b57487a33" in namespace "emptydir-9727" to be "success or failure"
Dec 17 14:54:07.710: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33": Phase="Pending", Reason="", readiness=false. Elapsed: 41.038463ms
Dec 17 14:54:09.718: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049658018s
Dec 17 14:54:11.727: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058853345s
Dec 17 14:54:13.762: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093537562s
Dec 17 14:54:15.770: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101073449s
Dec 17 14:54:17.780: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111016201s
STEP: Saw pod success
Dec 17 14:54:17.780: INFO: Pod "pod-118184cb-aa06-48b4-bc40-823b57487a33" satisfied condition "success or failure"
Dec 17 14:54:17.791: INFO: Trying to get logs from node iruya-node pod pod-118184cb-aa06-48b4-bc40-823b57487a33 container test-container: 
STEP: delete the pod
Dec 17 14:54:17.985: INFO: Waiting for pod pod-118184cb-aa06-48b4-bc40-823b57487a33 to disappear
Dec 17 14:54:18.112: INFO: Pod pod-118184cb-aa06-48b4-bc40-823b57487a33 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:54:18.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9727" for this suite.
Dec 17 14:54:24.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:54:24.252: INFO: namespace emptydir-9727 deletion completed in 6.130816262s

• [SLOW TEST:16.659 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:54:24.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 14:54:33.557: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:54:33.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4748" for this suite.
Dec 17 14:54:39.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:54:39.752: INFO: namespace container-runtime-4748 deletion completed in 6.115388114s

• [SLOW TEST:15.500 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
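
The "Expected: &{} to match" line is the assertion that the termination message is empty: FallbackToLogsOnError only substitutes container logs when the container fails, so a container that succeeds without writing to /dev/termination-log reports an empty message. A sketch of such a container:

  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "exit 0"]   # succeeds without writing a termination message
      terminationMessagePolicy: FallbackToLogsOnError
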
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:54:39.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 17 14:54:39.862: INFO: Waiting up to 5m0s for pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda" in namespace "emptydir-2540" to be "success or failure"
Dec 17 14:54:39.884: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda": Phase="Pending", Reason="", readiness=false. Elapsed: 21.835101ms
Dec 17 14:54:41.895: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032658242s
Dec 17 14:54:43.909: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046440011s
Dec 17 14:54:45.924: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061220554s
Dec 17 14:54:47.935: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072586204s
Dec 17 14:54:49.944: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081484559s
STEP: Saw pod success
Dec 17 14:54:49.944: INFO: Pod "pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda" satisfied condition "success or failure"
Dec 17 14:54:49.949: INFO: Trying to get logs from node iruya-node pod pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda container test-container: 
STEP: delete the pod
Dec 17 14:54:50.156: INFO: Waiting for pod pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda to disappear
Dec 17 14:54:50.167: INFO: Pod pod-ee330b99-fdfa-43fb-a703-0ffcd97c8cda no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:54:50.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2540" for this suite.
Dec 17 14:54:56.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:54:56.340: INFO: namespace emptydir-2540 deletion completed in 6.156935359s

• [SLOW TEST:16.588 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:54:56.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-58b5c423-5b36-4fcd-9ed0-e98528d9a703
STEP: Creating a pod to test consume secrets
Dec 17 14:54:56.462: INFO: Waiting up to 5m0s for pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff" in namespace "secrets-5138" to be "success or failure"
Dec 17 14:54:56.493: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 31.189449ms
Dec 17 14:54:58.508: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046264393s
Dec 17 14:55:00.519: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057687535s
Dec 17 14:55:02.542: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080426824s
Dec 17 14:55:04.562: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100336088s
Dec 17 14:55:06.599: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137767447s
STEP: Saw pod success
Dec 17 14:55:06.600: INFO: Pod "pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff" satisfied condition "success or failure"
Dec 17 14:55:06.607: INFO: Trying to get logs from node iruya-node pod pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff container secret-volume-test: 
STEP: delete the pod
Dec 17 14:55:06.733: INFO: Waiting for pod pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff to disappear
Dec 17 14:55:06.757: INFO: Pod pod-secrets-a1da6c55-263e-4acf-96f0-3a76bcdcf8ff no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:55:06.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5138" for this suite.
Dec 17 14:55:12.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:55:12.961: INFO: namespace secrets-5138 deletion completed in 6.181931667s

• [SLOW TEST:16.620 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:55:12.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-61efb68b-eb35-4e6a-a515-bf777f63e9b7 in namespace container-probe-6821
Dec 17 14:55:21.137: INFO: Started pod test-webserver-61efb68b-eb35-4e6a-a515-bf777f63e9b7 in namespace container-probe-6821
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 14:55:21.162: INFO: Initial restart count of pod test-webserver-61efb68b-eb35-4e6a-a515-bf777f63e9b7 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:59:22.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6821" for this suite.
Dec 17 14:59:29.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 14:59:29.319: INFO: namespace container-probe-6821 deletion completed in 6.245060533s

• [SLOW TEST:256.358 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
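The probe spec above runs a webserver pod with an HTTP liveness probe for roughly four minutes (hence the 256s runtime) and asserts restartCount stays 0. A sketch of the probe wiring, assuming the v1.15 API where the embedded probe field is named Handler (v1.22+ renames it ProbeHandler); the suite probes /healthz on its own test-webserver image, so the plain nginx image and "/" path here are stand-ins:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "test-webserver",
                Image: "nginx", // illustrative; the suite ships its own webserver image serving /healthz
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{ // ProbeHandler in v1.22+
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/", // the suite's pod probes /healthz
                            Port: intstr.FromInt(80),
                        },
                    },
                    InitialDelaySeconds: 15,
                    TimeoutSeconds:      5,
                    FailureThreshold:    3,
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    // As long as the probe endpoint keeps returning 2xx, the kubelet never
    // restarts the container, so status.containerStatuses[0].restartCount
    // stays at the initial 0 the log records.
}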
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 14:59:29.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 17 14:59:42.095: INFO: Successfully updated pod "annotationupdateac9d495d-3233-4fcb-ab2f-b16fad666a6d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 14:59:44.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4930" for this suite.
Dec 17 15:00:06.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:00:06.323: INFO: namespace downward-api-4930 deletion completed in 22.161340101s

• [SLOW TEST:37.004 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
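The spec above projects metadata.annotations into a file through a downwardAPI volume, then modifies the pod's annotations; the kubelet rewrites the projected file, which is the update the test observes. A minimal sketch under the same v1.15-era API assumption; the annotation key/value and names are illustrative:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "annotationupdate-demo",
            Annotations: map[string]string{"builder": "bar"},
        },
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "annotations",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    created, err := cs.CoreV1().Pods("default").Create(pod)
    if err != nil {
        panic(err)
    }

    // Changing the annotation and updating the pod makes the kubelet rewrite
    // /etc/podinfo/annotations within its sync period; that rewrite is the
    // "Successfully updated pod" modification the spec waits to see.
    created.Annotations["builder"] = "foo"
    if _, err := cs.CoreV1().Pods("default").Update(created); err != nil {
        panic(err)
    }
}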
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:00:06.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-a657b291-383e-44bd-bab5-6c7a7230330d
STEP: Creating a pod to test consume secrets
Dec 17 15:00:06.437: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a" in namespace "projected-4474" to be "success or failure"
Dec 17 15:00:06.456: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.675032ms
Dec 17 15:00:08.470: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032625509s
Dec 17 15:00:10.487: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049564094s
Dec 17 15:00:12.551: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113693391s
Dec 17 15:00:14.566: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129186447s
Dec 17 15:00:16.584: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147184616s
STEP: Saw pod success
Dec 17 15:00:16.585: INFO: Pod "pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a" satisfied condition "success or failure"
Dec 17 15:00:16.601: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a container secret-volume-test: 
STEP: delete the pod
Dec 17 15:00:16.699: INFO: Waiting for pod pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a to disappear
Dec 17 15:00:16.708: INFO: Pod pod-projected-secrets-1bc1e4ef-8539-46cf-b0c3-87d189b0c22a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:00:16.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4474" for this suite.
Dec 17 15:00:22.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:00:22.936: INFO: namespace projected-4474 deletion completed in 6.220123795s

• [SLOW TEST:16.612 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
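Here the same secret is projected through two separate projected volumes mounted at two paths in one pod. A sketch of that shape, assuming a pre-existing secret named projected-secret-demo (illustrative) and the v1.15-era API:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Two projected volumes, both sourcing the same secret.
    vol := func(name string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"},
                        },
                    }},
                },
            },
        }
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/projected-secret-1/* /etc/projected-secret-2/*"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "secret-volume-1", MountPath: "/etc/projected-secret-1"},
                    {Name: "secret-volume-2", MountPath: "/etc/projected-secret-2"},
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}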
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:00:22.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2536
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 15:00:23.102: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 15:00:57.337: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2536 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 15:00:57.338: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 15:00:59.008: INFO: Found all expected endpoints: [netserver-0]
Dec 17 15:00:59.015: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2536 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 15:00:59.015: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 15:01:00.367: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:01:00.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2536" for this suite.
Dec 17 15:01:24.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:01:24.603: INFO: namespace pod-network-test-2536 deletion completed in 24.222214961s

• [SLOW TEST:61.667 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:01:24.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 17 15:01:35.441: INFO: Successfully updated pod "labelsupdate1b0152ce-7f57-436b-97d5-50689da177e4"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:01:37.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5918" for this suite.
Dec 17 15:01:59.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:01:59.709: INFO: namespace downward-api-5918 deletion completed in 22.155416401s

• [SLOW TEST:35.105 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:01:59.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 17 15:01:59.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9009'
Dec 17 15:02:02.297: INFO: stderr: ""
Dec 17 15:02:02.297: INFO: stdout: "pod/pause created\n"
Dec 17 15:02:02.297: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 17 15:02:02.297: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9009" to be "running and ready"
Dec 17 15:02:02.397: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 99.075564ms
Dec 17 15:02:04.403: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10582922s
Dec 17 15:02:06.421: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123425384s
Dec 17 15:02:08.449: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151337402s
Dec 17 15:02:10.461: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.163342655s
Dec 17 15:02:10.461: INFO: Pod "pause" satisfied condition "running and ready"
Dec 17 15:02:10.461: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 17 15:02:10.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9009'
Dec 17 15:02:10.689: INFO: stderr: ""
Dec 17 15:02:10.689: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 17 15:02:10.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9009'
Dec 17 15:02:10.829: INFO: stderr: ""
Dec 17 15:02:10.829: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 17 15:02:10.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9009'
Dec 17 15:02:10.980: INFO: stderr: ""
Dec 17 15:02:10.980: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 17 15:02:10.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9009'
Dec 17 15:02:11.101: INFO: stderr: ""
Dec 17 15:02:11.101: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 17 15:02:11.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9009'
Dec 17 15:02:11.227: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 15:02:11.228: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 17 15:02:11.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9009'
Dec 17 15:02:11.376: INFO: stderr: "No resources found.\n"
Dec 17 15:02:11.376: INFO: stdout: ""
Dec 17 15:02:11.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9009 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 15:02:11.469: INFO: stderr: ""
Dec 17 15:02:11.469: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:02:11.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9009" for this suite.
Dec 17 15:02:17.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:02:17.908: INFO: namespace kubectl-9009 deletion completed in 6.433779637s

• [SLOW TEST:18.199 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:02:17.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1663fd3b-29ff-478c-8394-6a4ed7fddf88
STEP: Creating a pod to test consume configMaps
Dec 17 15:02:18.042: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6" in namespace "projected-3199" to be "success or failure"
Dec 17 15:02:18.057: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.642202ms
Dec 17 15:02:20.065: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022017931s
Dec 17 15:02:22.071: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027943468s
Dec 17 15:02:24.081: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037847789s
Dec 17 15:02:26.088: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044636043s
Dec 17 15:02:28.238: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.195182642s
STEP: Saw pod success
Dec 17 15:02:28.239: INFO: Pod "pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6" satisfied condition "success or failure"
Dec 17 15:02:28.244: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 15:02:28.982: INFO: Waiting for pod pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6 to disappear
Dec 17 15:02:28.991: INFO: Pod pod-projected-configmaps-facce2e7-978a-4aae-9a03-1a1ded205ba6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:02:28.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3199" for this suite.
Dec 17 15:02:35.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:02:35.125: INFO: namespace projected-3199 deletion completed in 6.128390677s

• [SLOW TEST:17.217 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:02:35.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:02:35.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2379" for this suite.
Dec 17 15:02:41.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:02:41.416: INFO: namespace services-2379 deletion completed in 6.186367833s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.290 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
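This short spec only inspects the built-in "kubernetes" service in the default namespace and checks that it exposes the API server over a secure https/443 port, which is why no pods appear in the log. The equivalent check as a client-go sketch (v1.15-era API):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    svc, err := cs.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range svc.Spec.Ports {
        // A conforming cluster exposes a port named "https" on 443.
        if p.Name == "https" && p.Port == 443 {
            fmt.Println("secure master service port found:", p.Port)
        }
    }
}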
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:02:41.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:02:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7085" for this suite.
Dec 17 15:03:37.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:03:37.841: INFO: namespace kubelet-test-7085 deletion completed in 46.17347313s

• [SLOW TEST:56.424 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
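The kubelet spec above runs a busybox pod whose command echoes to stdout and asserts the text shows up in the container log. A sketch of that round trip, assuming the v1.15-era API (DoRaw without a context argument); waiting for the pod to reach Succeeded is elided, and all names are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "busybox-logs-demo",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo 'Hello from busybox'"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }

    // (Waiting for the pod to finish is elided here.)
    raw, err := cs.CoreV1().Pods("default").GetLogs(pod.Name, &corev1.PodLogOptions{}).DoRaw()
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s", raw) // should contain: Hello from busybox
}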
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:03:37.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-494fa6a1-6a96-4209-b298-88b2fe815094
STEP: Creating configMap with name cm-test-opt-upd-9b31fe5e-d979-44ec-9a30-9e6feb4f0445
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-494fa6a1-6a96-4209-b298-88b2fe815094
STEP: Updating configmap cm-test-opt-upd-9b31fe5e-d979-44ec-9a30-9e6feb4f0445
STEP: Creating configMap with name cm-test-opt-create-af49d455-e3e7-420f-ad0b-a829bbd3c7ed
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:04:57.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-45" for this suite.
Dec 17 15:05:19.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:05:20.094: INFO: namespace projected-45 deletion completed in 22.132279418s

• [SLOW TEST:102.253 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
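The "optional updates" spec mounts ConfigMaps marked optional through a projected volume, then deletes one, updates another, and creates a third, waiting for all three changes to surface in the mounted files (the long 102s runtime is mostly that kubelet propagation wait). A sketch of the optional projection plus an update, v1.15-era API, illustrative names:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-demo"},
                                // Optional: the pod starts even if the ConfigMap
                                // does not exist yet, and picks it up later.
                                Optional: &optional,
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/cm"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }

    // Updating (or deleting/creating) the optional ConfigMap is eventually
    // reflected in the mounted files by the kubelet; that is the update the
    // spec's "waiting to observe update in volume" step is watching for.
    cm, err := cs.CoreV1().ConfigMaps("default").Get("cm-test-opt-demo", metav1.GetOptions{})
    if err == nil {
        cm.Data = map[string]string{"data-1": "value-2"}
        if _, err := cs.CoreV1().ConfigMaps("default").Update(cm); err != nil {
            panic(err)
        }
    }
}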
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:05:20.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 17 15:05:40.252: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:40.264: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:42.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:42.272: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:44.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:44.273: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:46.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:46.274: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:48.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:48.279: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:50.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:50.280: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:52.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:52.273: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:54.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:54.274: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:56.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:56.285: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:05:58.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:05:58.279: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:06:00.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:06:00.272: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:06:02.264: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:06:02.274: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:06:04.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:06:05.295: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:06:06.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:06:06.283: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 15:06:08.265: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 15:06:08.309: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:06:08.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7930" for this suite.
Dec 17 15:06:30.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:06:30.560: INFO: namespace container-lifecycle-hook-7930 deletion completed in 22.21028387s

• [SLOW TEST:70.466 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
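The lifecycle spec creates a pod with a PreStop exec hook, deletes it, and polls until it disappears; the run of "still exists" lines above is the termination grace period during which the hook executes. A sketch, assuming the v1.15 API (handler type named Handler; LifecycleHandler in v1.22+); the hook command here is illustrative, whereas the real spec's hook calls out to a separate handler pod to prove it ran:

package main

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "pod-with-prestop-exec-hook",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 600"},
                Lifecycle: &corev1.Lifecycle{
                    PreStop: &corev1.Handler{ // LifecycleHandler in v1.22+
                        Exec: &corev1.ExecAction{
                            Command: []string{"sh", "-c", "echo prestop ran >> /tmp/prestop"},
                        },
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }

    // Deleting the pod triggers the PreStop hook during the grace period,
    // which is why deletion is not instantaneous in the log above.
    if err := cs.CoreV1().Pods("default").Delete(pod.Name, &metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
    for {
        _, err := cs.CoreV1().Pods("default").Get(pod.Name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Println("pod no longer exists")
            break
        }
        time.Sleep(2 * time.Second) // same 2s cadence as the polling in the log
    }
}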
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:06:30.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 15:06:30.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97" in namespace "projected-2963" to be "success or failure"
Dec 17 15:06:30.712: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280936ms
Dec 17 15:06:32.724: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017751728s
Dec 17 15:06:34.744: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03778678s
Dec 17 15:06:36.753: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047327356s
Dec 17 15:06:38.806: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099560913s
Dec 17 15:06:40.828: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122217277s
STEP: Saw pod success
Dec 17 15:06:40.828: INFO: Pod "downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97" satisfied condition "success or failure"
Dec 17 15:06:40.833: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97 container client-container: 
STEP: delete the pod
Dec 17 15:06:40.907: INFO: Waiting for pod downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97 to disappear
Dec 17 15:06:40.915: INFO: Pod downwardapi-volume-61eb2466-c299-468c-af83-9cd92b830b97 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:06:40.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2963" for this suite.
Dec 17 15:06:47.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:06:47.132: INFO: namespace projected-2963 deletion completed in 6.212357107s

• [SLOW TEST:16.572 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
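This downwardAPI spec projects limits.memory into a file while deliberately setting no memory limit on the container; the kubelet then substitutes the node's allocatable memory as the default, which the test reads back. A sketch of the resourceFieldRef wiring with a 1Mi divisor, v1.15-era API, illustrative names:

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                                Divisor:       resource.MustParse("1Mi"),
                            },
                        }},
                    },
                },
            }},
            // No resources.limits.memory on the container, so the kubelet
            // renders node allocatable memory into the file instead.
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"cat", "/etc/podinfo/memory_limit"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}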
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:06:47.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6633
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 17 15:06:47.315: INFO: Found 0 stateful pods, waiting for 3
Dec 17 15:06:57.448: INFO: Found 2 stateful pods, waiting for 3
Dec 17 15:07:07.326: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 15:07:07.327: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 15:07:07.327: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 15:07:17.324: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 15:07:17.324: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 15:07:17.324: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 15:07:17.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6633 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 15:07:17.942: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 17 15:07:17.942: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 15:07:17.942: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 17 15:07:28.012: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 17 15:07:38.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6633 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 15:07:38.698: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 17 15:07:38.698: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 15:07:38.698: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 15:07:48.777: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:07:48.777: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:07:48.777: INFO: Waiting for Pod statefulset-6633/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:07:48.777: INFO: Waiting for Pod statefulset-6633/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:07:58.798: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:07:58.798: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:07:58.798: INFO: Waiting for Pod statefulset-6633/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:08:08.885: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:08:08.885: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:08:08.885: INFO: Waiting for Pod statefulset-6633/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:08:18.797: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:08:18.797: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 17 15:08:28.807: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 17 15:08:38.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6633 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 17 15:08:39.288: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 17 15:08:39.288: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 17 15:08:39.288: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 17 15:08:49.951: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 17 15:08:59.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6633 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 17 15:09:00.560: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 17 15:09:00.560: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 17 15:09:00.560: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 17 15:09:10.633: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:09:10.634: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 15:09:10.634: INFO: Waiting for Pod statefulset-6633/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 15:09:20.648: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:09:20.648: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 15:09:20.648: INFO: Waiting for Pod statefulset-6633/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 15:09:30.669: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:09:30.669: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 15:09:40.653: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
Dec 17 15:09:40.653: INFO: Waiting for Pod statefulset-6633/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 17 15:09:50.658: INFO: Waiting for StatefulSet statefulset-6633/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 17 15:10:00.664: INFO: Deleting all statefulset in ns statefulset-6633
Dec 17 15:10:00.671: INFO: Scaling statefulset ss2 to 0
Dec 17 15:10:40.714: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 15:10:40.722: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:10:40.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6633" for this suite.
Dec 17 15:10:48.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:10:48.957: INFO: namespace statefulset-6633 deletion completed in 8.190689346s

• [SLOW TEST:241.824 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
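The rolling-update phase in the StatefulSet run above is just a template image change (nginx:1.14-alpine to 1.15-alpine); the controller then replaces pods in reverse ordinal order (ss2-2, ss2-1, ss2-0), and the rollback is the same operation with the old image. A sketch of the update step using the names from the log, assuming the set still exists and the v1.15-era API:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Namespace and name taken from the log; the suite generates fresh ones per run.
    ss, err := cs.AppsV1().StatefulSets("statefulset-6633").Get("ss2", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Bumping the template image creates a new controller revision; with the
    // default RollingUpdate strategy the pods are replaced from the highest
    // ordinal down, which is exactly the "reverse ordinal order" in the log.
    ss.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.15-alpine"
    ss, err = cs.AppsV1().StatefulSets("statefulset-6633").Update(ss)
    if err != nil {
        panic(err)
    }
    fmt.Println("update revision:", ss.Status.UpdateRevision)

    // Rolling back is the same call with the previous image; the log's
    // "Waiting for Pod ... to have revision" lines track currentRevision
    // vs. updateRevision converging per pod.
}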
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:10:48.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a1c43827-6afb-4714-ba93-dd058a71d7ed
STEP: Creating a pod to test consume secrets
Dec 17 15:10:49.170: INFO: Waiting up to 5m0s for pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7" in namespace "secrets-2424" to be "success or failure"
Dec 17 15:10:49.185: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.179674ms
Dec 17 15:10:51.190: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020102086s
Dec 17 15:10:53.202: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031265268s
Dec 17 15:10:55.214: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043675076s
Dec 17 15:10:57.240: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070072482s
Dec 17 15:10:59.248: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077425505s
STEP: Saw pod success
Dec 17 15:10:59.248: INFO: Pod "pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7" satisfied condition "success or failure"
Dec 17 15:10:59.251: INFO: Trying to get logs from node iruya-node pod pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7 container secret-env-test: 
STEP: delete the pod
Dec 17 15:10:59.361: INFO: Waiting for pod pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7 to disappear
Dec 17 15:10:59.366: INFO: Pod pod-secrets-5706241d-9b2d-4ba3-b714-08bd2c0571c7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:10:59.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2424" for this suite.
Dec 17 15:11:05.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:11:05.535: INFO: namespace secrets-2424 deletion completed in 6.163248596s

• [SLOW TEST:16.577 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
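Unlike the volume-based secret specs earlier, this one consumes a secret key as an environment variable via secretKeyRef. A sketch, assuming a pre-existing secret named secret-env-demo with key data-1 (both illustrative) and the v1.15-era API:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo $SECRET_DATA"},
                Env: []corev1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &corev1.EnvVarSource{
                        SecretKeyRef: &corev1.SecretKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-env-demo"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
    // The test then asserts the secret's value appears in the container log.
}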
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:11:05.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-ca49aa6c-2ec5-4c28-bba9-600911ad07bc
STEP: Creating a pod to test consume configMaps
Dec 17 15:11:05.660: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f" in namespace "projected-2294" to be "success or failure"
Dec 17 15:11:05.671: INFO: Pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.893182ms
Dec 17 15:11:07.680: INFO: Pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019447092s
Dec 17 15:11:09.691: INFO: Pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030791496s
Dec 17 15:11:11.700: INFO: Pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039232802s
Dec 17 15:11:13.714: INFO: Pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053756248s
STEP: Saw pod success
Dec 17 15:11:13.714: INFO: Pod "pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f" satisfied condition "success or failure"
Dec 17 15:11:13.722: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 15:11:13.885: INFO: Waiting for pod pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f to disappear
Dec 17 15:11:13.906: INFO: Pod pod-projected-configmaps-79b838e6-7127-4926-a6dd-d4cccedd1a8f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:11:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2294" for this suite.
Dec 17 15:11:19.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:11:20.064: INFO: namespace projected-2294 deletion completed in 6.148556355s

• [SLOW TEST:14.528 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
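"With mappings as non-root" means two things in the spec above: the ConfigMap key is remapped to a chosen relative path via items, and the pod runs with a non-root UID. A sketch of both, shown with a plain configMap volume source for brevity (the spec uses the projected flavor), v1.15-era API, illustrative names:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    uid := int64(1000) // run as a non-root user
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-nonroot-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cm-map-demo"},
                        // The "mapping": key data-1 is exposed at a chosen path.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}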
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:11:20.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 17 15:11:20.154: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7484,SelfLink:/api/v1/namespaces/watch-7484/configmaps/e2e-watch-test-resource-version,UID:8b4e2229-59f6-4f88-ada5-0ec62b426b73,ResourceVersion:17029455,Generation:0,CreationTimestamp:2019-12-17 15:11:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 15:11:20.154: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7484,SelfLink:/api/v1/namespaces/watch-7484/configmaps/e2e-watch-test-resource-version,UID:8b4e2229-59f6-4f88-ada5-0ec62b426b73,ResourceVersion:17029456,Generation:0,CreationTimestamp:2019-12-17 15:11:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:11:20.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7484" for this suite.
Dec 17 15:11:26.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:11:26.348: INFO: namespace watch-7484 deletion completed in 6.186263765s

• [SLOW TEST:6.283 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
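The watch spec mutates a ConfigMap twice, deletes it, and only then opens a watch from the resourceVersion returned by the first update; the API server replays everything after that point, which is why the log shows exactly one MODIFIED (mutation: 2) and one DELETED event. A sketch of the same sequence, v1.15-era API, illustrative names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    created, err := cs.CoreV1().ConfigMaps("default").Create(&corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "e2e-watch-demo",
            Labels: map[string]string{"watch-this-configmap": "from-resource-version"},
        },
    })
    if err != nil {
        panic(err)
    }

    // First mutation: remember the resourceVersion it returns.
    created.Data = map[string]string{"mutation": "1"}
    afterFirst, err := cs.CoreV1().ConfigMaps("default").Update(created)
    if err != nil {
        panic(err)
    }
    rv := afterFirst.ResourceVersion

    // Second mutation and delete happen before the watch even starts.
    afterFirst.Data["mutation"] = "2"
    if _, err := cs.CoreV1().ConfigMaps("default").Update(afterFirst); err != nil {
        panic(err)
    }
    if err := cs.CoreV1().ConfigMaps("default").Delete("e2e-watch-demo", &metav1.DeleteOptions{}); err != nil {
        panic(err)
    }

    // Watching from rv replays all changes after the first update:
    // MODIFIED (mutation: 2) followed by DELETED, as in the log above.
    w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
        ResourceVersion: rv,
        LabelSelector:   "watch-this-configmap=from-resource-version",
    })
    if err != nil {
        panic(err)
    }
    for ev := range w.ResultChan() {
        fmt.Println("Got:", ev.Type)
        if ev.Type == watch.Deleted {
            break
        }
    }
}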
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:11:26.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5844
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 15:11:26.805: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 15:12:05.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5844 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 15:12:05.233: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 15:12:05.700: INFO: Found all expected endpoints: [netserver-0]
Dec 17 15:12:06.091: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5844 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 15:12:06.092: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 15:12:06.549: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:12:06.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5844" for this suite.
Dec 17 15:12:30.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:12:30.711: INFO: namespace pod-network-test-5844 deletion completed in 24.145888608s

• [SLOW TEST:64.362 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
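The ExecWithOptions lines above show the suite curling http://<podIP>:8080/hostName from a host-network pod and grepping out blank lines. A plain-Go equivalent of that single check, assuming the pod IP and port from this run (both vary per run):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Pod IP and port taken from this run's log lines; placeholders elsewhere.
	url := "http://10.32.0.4:8080/hostName"

	client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The endpoint returns the serving pod's name, e.g. "netserver-0".
	fmt.Println(strings.TrimSpace(string(body)))
}
```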
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:12:30.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 15:12:30.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8" in namespace "projected-4834" to be "success or failure"
Dec 17 15:12:30.836: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.220016ms
Dec 17 15:12:32.905: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078716157s
Dec 17 15:12:34.915: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088812514s
Dec 17 15:12:36.926: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100451643s
Dec 17 15:12:38.973: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147429227s
Dec 17 15:12:40.982: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.155876741s
STEP: Saw pod success
Dec 17 15:12:40.982: INFO: Pod "downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8" satisfied condition "success or failure"
Dec 17 15:12:40.987: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8 container client-container: 
STEP: delete the pod
Dec 17 15:12:41.056: INFO: Waiting for pod downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8 to disappear
Dec 17 15:12:41.066: INFO: Pod downwardapi-volume-08966dd7-1698-4ccd-902c-791deb20e4c8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:12:41.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4834" for this suite.
Dec 17 15:12:49.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:12:49.212: INFO: namespace projected-4834 deletion completed in 8.118258909s

• [SLOW TEST:18.500 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
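A sketch of the kind of pod this projected-downwardAPI spec creates: a projected volume whose downwardAPI projection exposes the container's own CPU limit as a file. Pod/volume names, the image, and the 500m limit are illustrative, not the suite's exact values.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cpuLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// resourceFieldRef is what makes the limit visible in-volume.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = cpuLimitPod() }
```

The "success or failure" wait in the log is just the framework polling this pod's phase until it hits Succeeded, then reading the container log to verify the file contents.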
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:12:49.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 17 15:12:49.322: INFO: Waiting up to 5m0s for pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50" in namespace "emptydir-2200" to be "success or failure"
Dec 17 15:12:49.344: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50": Phase="Pending", Reason="", readiness=false. Elapsed: 22.617397ms
Dec 17 15:12:51.363: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04166892s
Dec 17 15:12:53.383: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061303064s
Dec 17 15:12:55.401: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078952987s
Dec 17 15:12:57.410: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088723396s
Dec 17 15:12:59.418: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096144815s
STEP: Saw pod success
Dec 17 15:12:59.418: INFO: Pod "pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50" satisfied condition "success or failure"
Dec 17 15:12:59.421: INFO: Trying to get logs from node iruya-node pod pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50 container test-container: 
STEP: delete the pod
Dec 17 15:12:59.536: INFO: Waiting for pod pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50 to disappear
Dec 17 15:12:59.544: INFO: Pod pod-2d31a32e-ba93-4f82-ad0b-cbc7224f4c50 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:12:59.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2200" for this suite.
Dec 17 15:13:05.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:13:05.720: INFO: namespace emptydir-2200 deletion completed in 6.169946885s

• [SLOW TEST:16.508 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
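An approximation of the "(root,0644,default)" case: an emptyDir on the default medium (node disk rather than tmpfs), with the container writing a 0644 file as root and printing its mode. The suite's own mounttest image performs this check natively; busybox is an illustrative stand-in here.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /ed/f && chmod 0644 /ed/f && stat -c '%a %U' /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "ed", MountPath: "/ed"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "ed",
				VolumeSource: corev1.VolumeSource{
					// Medium "" selects the default medium backing the emptyDir.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}

func main() { _ = emptyDirPod() }
```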
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:13:05.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1217 15:13:48.291911       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 15:13:48.292: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:13:48.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2176" for this suite.
Dec 17 15:14:06.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:14:06.457: INFO: namespace gc-2176 deletion completed in 18.148215903s

• [SLOW TEST:60.736 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
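The orphaning behavior verified above comes down to the delete options on the RC. A minimal client-go sketch (v1.15-era signatures; kubeconfig path, namespace, and RC name are placeholders):

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "Orphan" tells the garbage collector to strip the ownerReference from the
	// RC's pods instead of cascading the delete down to them — hence the 30s
	// watch above to confirm the pods survive.
	orphan := metav1.DeletePropagationOrphan
	err = client.CoreV1().ReplicationControllers("default").Delete(
		"simpletest.rc", // illustrative RC name
		&metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}
```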
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:14:06.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:14:14.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1471" for this suite.
Dec 17 15:14:56.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:14:56.964: INFO: namespace kubelet-test-1471 deletion completed in 42.187093661s

• [SLOW TEST:50.507 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
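The read-only-root check boils down to one securityContext field: with ReadOnlyRootFilesystem set, any write to the container's root filesystem should fail. Pod name, image, and the probe command below are illustrative.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func readOnlyRootPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-fs",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /file; echo exit=$?"}, // expect a non-zero exit
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}

func main() { _ = readOnlyRootPod() }
```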
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:14:56.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4020
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4020
STEP: Deleting pre-stop pod
Dec 17 15:15:20.278: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:15:20.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4020" for this suite.
Dec 17 15:16:00.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:16:00.483: INFO: namespace prestop-4020 deletion completed in 40.178232408s

• [SLOW TEST:63.519 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
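The `"prestop": 1` counter in the JSON above is incremented because the tester pod carries a preStop lifecycle hook that phones home to the server pod as the kubelet kills the container. A hedged sketch of that shape — the server address, image, and the /write endpoint are assumptions, since the log does not show the tester's spec:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func preStopPod(serverIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// In v1.15-era APIs the handler type is corev1.Handler
					// (renamed LifecycleHandler in later releases).
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Endpoint path is a placeholder for whatever the server counts.
							Command: []string{"wget", "-qO-", "http://" + serverIP + ":8080/write"},
						},
					},
				},
			}},
		},
	}
}

func main() { _ = preStopPod("10.44.0.2") }
```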
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:16:00.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-b51ae39e-aa68-4296-a45f-fbf84775ac9f
STEP: Creating a pod to test consume configMaps
Dec 17 15:16:00.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56" in namespace "configmap-4391" to be "success or failure"
Dec 17 15:16:00.642: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56": Phase="Pending", Reason="", readiness=false. Elapsed: 19.144977ms
Dec 17 15:16:02.813: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190576501s
Dec 17 15:16:04.836: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213431022s
Dec 17 15:16:06.846: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223422777s
Dec 17 15:16:08.860: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237639721s
Dec 17 15:16:10.873: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.250275917s
STEP: Saw pod success
Dec 17 15:16:10.873: INFO: Pod "pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56" satisfied condition "success or failure"
Dec 17 15:16:10.883: INFO: Trying to get logs from node iruya-node pod pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56 container configmap-volume-test: 
STEP: delete the pod
Dec 17 15:16:10.975: INFO: Waiting for pod pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56 to disappear
Dec 17 15:16:10.984: INFO: Pod pod-configmaps-285e9c47-505b-4ea6-8f45-3a877d01db56 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:16:10.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4391" for this suite.
Dec 17 15:16:17.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:16:17.148: INFO: namespace configmap-4391 deletion completed in 6.155609836s

• [SLOW TEST:16.664 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
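"With mappings" means the volume does not mirror every ConfigMap key under its own name: an items list remaps a chosen key to an explicit path inside the mount. ConfigMap/pod names and the key below are illustrative.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapMappedVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mapped"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Only data-2 appears in the volume, at the remapped path.
						Items: []corev1.KeyToPath{{
							Key:  "data-2",
							Path: "path/to/data-2",
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = configMapMappedVolumePod() }
```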
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:16:17.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1217 15:16:28.845135       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 15:16:28.845: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:16:28.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2856" for this suite.
Dec 17 15:16:36.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:16:37.029: INFO: namespace gc-2856 deletion completed in 8.180593839s

• [SLOW TEST:19.880 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
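The "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step above amounts to appending a second ownerReference. With two owners recorded, deleting simpletest-rc-to-be-deleted must not take those pods with it. A sketch of that mutation — the UID is a placeholder, since real ones come from the created RCs:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// addSecondOwner records simpletest-rc-to-stay as an additional owner, so the
// garbage collector keeps the pod alive after the first owner is deleted.
func addSecondOwner(pod *corev1.Pod, keeperUID types.UID) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "simpletest-rc-to-stay",
		UID:        keeperUID,
	})
}

func main() {
	p := &corev1.Pod{}
	addSecondOwner(p, types.UID("00000000-0000-0000-0000-000000000000"))
	_ = p
}
```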
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:16:37.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 15:16:37.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8" in namespace "projected-9108" to be "success or failure"
Dec 17 15:16:37.360: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 94.302555ms
Dec 17 15:16:39.384: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118847275s
Dec 17 15:16:41.402: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136782239s
Dec 17 15:16:43.417: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151354958s
Dec 17 15:16:45.506: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.239935542s
Dec 17 15:16:47.519: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.253846263s
Dec 17 15:16:49.530: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.264152651s
Dec 17 15:16:51.539: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.273148172s
Dec 17 15:16:53.547: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.281479245s
STEP: Saw pod success
Dec 17 15:16:53.547: INFO: Pod "downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8" satisfied condition "success or failure"
Dec 17 15:16:53.553: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8 container client-container: 
STEP: delete the pod
Dec 17 15:16:53.690: INFO: Waiting for pod downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8 to disappear
Dec 17 15:16:53.715: INFO: Pod downwardapi-volume-dcc24a4d-3e02-43a3-8495-e550edac26c8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:16:53.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9108" for this suite.
Dec 17 15:16:59.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:16:59.961: INFO: namespace projected-9108 deletion completed in 6.229879788s

• [SLOW TEST:22.932 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:16:59.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5588, will wait for the garbage collector to delete the pods
Dec 17 15:17:10.184: INFO: Deleting Job.batch foo took: 12.992019ms
Dec 17 15:17:10.485: INFO: Terminating Job.batch foo pods took: 300.814811ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:17:56.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5588" for this suite.
Dec 17 15:18:02.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:18:02.837: INFO: namespace job-5588 deletion completed in 6.138290931s

• [SLOW TEST:62.875 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
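Note the split in the timings above: deleting the Job object takes milliseconds, while "Ensuring job was deleted" waits tens of seconds for the garbage collector to reap the pods. A sketch of a cascading Job delete — the propagation policy shown is one reasonable choice, not necessarily the suite's exact one:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Background propagation returns immediately and leaves the GC to delete
	// the Job's pods asynchronously — matching the long wait seen in the log.
	bg := metav1.DeletePropagationBackground
	err = client.BatchV1().Jobs("job-5588").Delete("foo",
		&metav1.DeleteOptions{PropagationPolicy: &bg})
	if err != nil {
		panic(err)
	}
}
```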
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:18:02.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4663/secret-test-2884509d-68ef-4f66-9757-95c95f58c5dc
STEP: Creating a pod to test consume secrets
Dec 17 15:18:02.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4" in namespace "secrets-4663" to be "success or failure"
Dec 17 15:18:03.003: INFO: Pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.590353ms
Dec 17 15:18:05.011: INFO: Pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030474429s
Dec 17 15:18:07.018: INFO: Pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038247731s
Dec 17 15:18:09.025: INFO: Pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044777336s
Dec 17 15:18:11.035: INFO: Pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054950478s
STEP: Saw pod success
Dec 17 15:18:11.035: INFO: Pod "pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4" satisfied condition "success or failure"
Dec 17 15:18:11.038: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4 container env-test: 
STEP: delete the pod
Dec 17 15:18:11.151: INFO: Waiting for pod pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4 to disappear
Dec 17 15:18:11.157: INFO: Pod pod-configmaps-ab6eda22-0edf-4b75-93d6-7859f4227eb4 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:18:11.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4663" for this suite.
Dec 17 15:18:17.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:18:17.407: INFO: namespace secrets-4663 deletion completed in 6.245544909s

• [SLOW TEST:14.570 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
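Environment consumption as exercised here (and by the [sig-node] ConfigMap spec further down, which uses the same mechanism with configMapKeyRef) is a valueFrom reference per env var. Object and key names below are illustrative; both flavors are shown in one pod:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func envTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-env-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep -E 'SECRET_DATA|CONFIG_DATA'"},
				Env: []corev1.EnvVar{
					{
						// Secret key surfaced as an env var.
						Name: "SECRET_DATA",
						ValueFrom: &corev1.EnvVarSource{
							SecretKeyRef: &corev1.SecretKeySelector{
								LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
								Key:                  "data-1",
							},
						},
					},
					{
						// ConfigMap key surfaced the same way.
						Name: "CONFIG_DATA",
						ValueFrom: &corev1.EnvVarSource{
							ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
								LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
								Key:                  "data-1",
							},
						},
					},
				},
			}},
		},
	}
}

func main() { _ = envTestPod() }
```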
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:18:17.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:18:17.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8449" for this suite.
Dec 17 15:18:23.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:18:24.054: INFO: namespace kubelet-test-8449 deletion completed in 6.230141604s

• [SLOW TEST:6.647 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:18:24.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1531/configmap-test-5d13f5a9-666e-47e9-9f27-edbf412fc609
STEP: Creating a pod to test consume configMaps
Dec 17 15:18:24.210: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595" in namespace "configmap-1531" to be "success or failure"
Dec 17 15:18:24.217: INFO: Pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366252ms
Dec 17 15:18:26.227: INFO: Pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016201807s
Dec 17 15:18:28.240: INFO: Pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030078491s
Dec 17 15:18:30.252: INFO: Pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042028506s
Dec 17 15:18:32.273: INFO: Pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06279416s
STEP: Saw pod success
Dec 17 15:18:32.274: INFO: Pod "pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595" satisfied condition "success or failure"
Dec 17 15:18:32.282: INFO: Trying to get logs from node iruya-node pod pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595 container env-test: 
STEP: delete the pod
Dec 17 15:18:32.580: INFO: Waiting for pod pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595 to disappear
Dec 17 15:18:32.597: INFO: Pod pod-configmaps-cb49419f-6311-42ed-b6cb-f7527bc29595 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:18:32.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1531" for this suite.
Dec 17 15:18:38.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:18:38.882: INFO: namespace configmap-1531 deletion completed in 6.270118507s

• [SLOW TEST:14.827 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:18:38.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-21b490a2-3155-41d6-b55e-27888747efcb
STEP: Creating a pod to test consume configMaps
Dec 17 15:18:38.965: INFO: Waiting up to 5m0s for pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe" in namespace "configmap-8425" to be "success or failure"
Dec 17 15:18:39.048: INFO: Pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 83.415383ms
Dec 17 15:18:41.055: INFO: Pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089912571s
Dec 17 15:18:43.062: INFO: Pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096900114s
Dec 17 15:18:45.069: INFO: Pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104270023s
Dec 17 15:18:47.075: INFO: Pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110529115s
STEP: Saw pod success
Dec 17 15:18:47.076: INFO: Pod "pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe" satisfied condition "success or failure"
Dec 17 15:18:47.080: INFO: Trying to get logs from node iruya-node pod pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe container configmap-volume-test: 
STEP: delete the pod
Dec 17 15:18:47.138: INFO: Waiting for pod pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe to disappear
Dec 17 15:18:47.146: INFO: Pod pod-configmaps-88b77eec-46aa-4193-acfd-1ddca629fdbe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:18:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8425" for this suite.
Dec 17 15:18:53.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:18:53.327: INFO: namespace configmap-8425 deletion completed in 6.176201149s

• [SLOW TEST:14.445 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:18:53.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-63bd3a31-428d-40c9-8976-8f940d58d915
STEP: Creating a pod to test consume configMaps
Dec 17 15:18:53.425: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc" in namespace "configmap-3822" to be "success or failure"
Dec 17 15:18:53.468: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 43.043459ms
Dec 17 15:18:55.480: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055131725s
Dec 17 15:18:57.489: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064406471s
Dec 17 15:18:59.499: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073751457s
Dec 17 15:19:01.506: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081176487s
Dec 17 15:19:03.518: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093176099s
STEP: Saw pod success
Dec 17 15:19:03.518: INFO: Pod "pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc" satisfied condition "success or failure"
Dec 17 15:19:03.523: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc container configmap-volume-test: 
STEP: delete the pod
Dec 17 15:19:03.894: INFO: Waiting for pod pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc to disappear
Dec 17 15:19:03.904: INFO: Pod pod-configmaps-6ad26ca1-7cfe-4637-8dc9-101e64c5f5bc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:19:03.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3822" for this suite.
Dec 17 15:19:09.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:19:10.082: INFO: namespace configmap-3822 deletion completed in 6.170887044s

• [SLOW TEST:16.754 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:19:10.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 15:19:10.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209" in namespace "downward-api-8379" to be "success or failure"
Dec 17 15:19:10.210: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209": Phase="Pending", Reason="", readiness=false. Elapsed: 47.489843ms
Dec 17 15:19:12.217: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054291737s
Dec 17 15:19:14.226: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063731208s
Dec 17 15:19:16.235: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071949771s
Dec 17 15:19:18.245: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082464554s
Dec 17 15:19:20.255: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092583714s
STEP: Saw pod success
Dec 17 15:19:20.255: INFO: Pod "downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209" satisfied condition "success or failure"
Dec 17 15:19:20.262: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209 container client-container: 
STEP: delete the pod
Dec 17 15:19:20.345: INFO: Waiting for pod downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209 to disappear
Dec 17 15:19:20.355: INFO: Pod downwardapi-volume-f5b42d16-d620-4a2e-bedd-f3df3b6d0209 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:19:20.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8379" for this suite.
Dec 17 15:19:26.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:19:26.568: INFO: namespace downward-api-8379 deletion completed in 6.202769912s

• [SLOW TEST:16.485 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
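"Podname only" uses a plain downwardAPI volume (not a projected one this time) exposing just metadata.name as a file. Names and image are illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podnamePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							// fieldRef pulls the value from the pod's own metadata.
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = podnamePod() }
```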
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:19:26.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3a089d97-99f5-436e-9a9c-8d7baa57799f
STEP: Creating a pod to test consume configMaps
Dec 17 15:19:26.676: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef" in namespace "configmap-9019" to be "success or failure"
Dec 17 15:19:26.681: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.799245ms
Dec 17 15:19:28.693: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017690605s
Dec 17 15:19:30.704: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028470849s
Dec 17 15:19:32.714: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038208004s
Dec 17 15:19:34.724: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef": Phase="Running", Reason="", readiness=true. Elapsed: 8.048103194s
Dec 17 15:19:36.735: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059682522s
STEP: Saw pod success
Dec 17 15:19:36.736: INFO: Pod "pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef" satisfied condition "success or failure"
Dec 17 15:19:36.739: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef container configmap-volume-test: 
STEP: delete the pod
Dec 17 15:19:36.995: INFO: Waiting for pod pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef to disappear
Dec 17 15:19:37.050: INFO: Pod pod-configmaps-9b9fdd7a-ba32-40e8-b0a1-705ee907abef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:19:37.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9019" for this suite.
Dec 17 15:19:43.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:19:43.295: INFO: namespace configmap-9019 deletion completed in 6.234434703s

• [SLOW TEST:16.727 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:19:43.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-cb9b800f-5383-4061-935f-372d5b6f0672 in namespace container-probe-3978
Dec 17 15:19:51.473: INFO: Started pod liveness-cb9b800f-5383-4061-935f-372d5b6f0672 in namespace container-probe-3978
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 15:19:51.484: INFO: Initial restart count of pod liveness-cb9b800f-5383-4061-935f-372d5b6f0672 is 0
Dec 17 15:20:19.655: INFO: Restart count of pod container-probe-3978/liveness-cb9b800f-5383-4061-935f-372d5b6f0672 is now 1 (28.171179104s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:20:19.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3978" for this suite.
Dec 17 15:20:25.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:20:25.866: INFO: namespace container-probe-3978 deletion completed in 6.156099052s

• [SLOW TEST:42.571 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
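The restart observed above (count 0 -> 1 after ~28s) is the kubelet acting on an httpGet liveness probe against /healthz. The conformance test uses a test image whose /healthz starts failing after a while; the image name and timings below are assumptions for illustration. Note that in the v1.15-era API, Probe embeds corev1.Handler (renamed ProbeHandler in later releases).

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.1", // assumed image
				Args:  []string{"/server"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1, // restart on the first failed probe
				},
			}},
		},
	}
}

func main() { _ = livenessPod() }
```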
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:20:25.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 17 15:20:26.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9" in namespace "downward-api-5505" to be "success or failure"
Dec 17 15:20:26.054: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.218382ms
Dec 17 15:20:28.098: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069154982s
Dec 17 15:20:30.110: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081471838s
Dec 17 15:20:32.117: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088327775s
Dec 17 15:20:34.123: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094382925s
Dec 17 15:20:36.132: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103133085s
STEP: Saw pod success
Dec 17 15:20:36.132: INFO: Pod "downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9" satisfied condition "success or failure"
Dec 17 15:20:36.135: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9 container client-container: 
STEP: delete the pod
Dec 17 15:20:36.622: INFO: Waiting for pod downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9 to disappear
Dec 17 15:20:36.635: INFO: Pod downwardapi-volume-f1c897ad-93a1-4866-95c2-51d2dbec22a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:20:36.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5505" for this suite.
Dec 17 15:20:42.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:20:42.768: INFO: namespace downward-api-5505 deletion completed in 6.124759746s

• [SLOW TEST:16.901 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:20:42.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 15:20:42.909: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 28.216282ms)
Dec 17 15:20:42.917: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.008108ms)
Dec 17 15:20:42.923: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.828952ms)
Dec 17 15:20:42.927: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.059293ms)
Dec 17 15:20:42.931: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.988658ms)
Dec 17 15:20:42.939: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.669339ms)
Dec 17 15:20:42.944: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.136319ms)
Dec 17 15:20:42.948: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.125342ms)
Dec 17 15:20:42.954: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.019423ms)
Dec 17 15:20:42.959: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.856105ms)
Dec 17 15:20:42.964: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.355426ms)
Dec 17 15:20:42.973: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.314041ms)
Dec 17 15:20:42.981: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.889232ms)
Dec 17 15:20:42.985: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.770695ms)
Dec 17 15:20:42.989: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.215796ms)
Dec 17 15:20:42.991: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 2.662455ms)
Dec 17 15:20:42.996: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.528956ms)
Dec 17 15:20:43.003: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.167451ms)
Dec 17 15:20:43.008: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.896041ms)
Dec 17 15:20:43.013: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.801952ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:20:43.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1401" for this suite.
Dec 17 15:20:49.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:20:49.111: INFO: namespace proxy-1401 deletion completed in 6.094636592s

• [SLOW TEST:6.343 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
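
The URL the test hits 20 times above is the apiserver's node proxy subresource. A minimal sketch (Go, pre-1.17 client-go; node name taken from the log, everything else illustrative) of fetching the same /logs/ listing through it:

package e2esketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// nodeLogsViaProxy fetches the kubelet's /logs/ directory listing through
// the apiserver, i.e. GET /api/v1/nodes/<node>/proxy/logs/.
func nodeLogsViaProxy(cs *kubernetes.Clientset, node string) error {
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("logs/").
		DoRaw() // pre-1.17 signature: no context argument
	if err != nil {
		return err
	}
	fmt.Printf("%s\n", body)
	return nil
}
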
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:20:49.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:21:22.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3541" for this suite.
Dec 17 15:21:28.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:21:28.633: INFO: namespace namespaces-3541 deletion completed in 6.170654599s
STEP: Destroying namespace "nsdeletetest-7688" for this suite.
Dec 17 15:21:28.636: INFO: Namespace nsdeletetest-7688 was already deleted
STEP: Destroying namespace "nsdeletetest-5298" for this suite.
Dec 17 15:21:34.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:21:34.819: INFO: namespace nsdeletetest-5298 deletion completed in 6.182029441s

• [SLOW TEST:45.707 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
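
The delete-and-wait pattern above (destroy the namespace, poll until it is gone, then verify no pods survive) looks roughly like this sketch (Go, pre-1.17 client-go; the 2s/3m polling interval and timeout are illustrative):

package e2esketch

import (
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait deletes a namespace and polls until the server
// reports NotFound, at which point every pod in it has been removed too.
func deleteNamespaceAndWait(cs *kubernetes.Clientset, name string) error {
	if err := cs.CoreV1().Namespaces().Delete(name, &metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil // namespace (and its pods) fully removed
		}
		return false, err // keep polling on nil, abort on a real error
	})
}
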
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:21:34.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 17 15:21:35.029: INFO: Waiting up to 5m0s for pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b" in namespace "emptydir-2110" to be "success or failure"
Dec 17 15:21:35.073: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b": Phase="Pending", Reason="", readiness=false. Elapsed: 43.723536ms
Dec 17 15:21:37.111: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081541198s
Dec 17 15:21:39.123: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093768501s
Dec 17 15:21:41.160: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130601251s
Dec 17 15:21:43.175: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14562908s
Dec 17 15:21:45.190: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.161280865s
STEP: Saw pod success
Dec 17 15:21:45.191: INFO: Pod "pod-239fa52d-efab-49dd-a38f-53c1feb1310b" satisfied condition "success or failure"
Dec 17 15:21:45.194: INFO: Trying to get logs from node iruya-node pod pod-239fa52d-efab-49dd-a38f-53c1feb1310b container test-container: 
STEP: delete the pod
Dec 17 15:21:45.389: INFO: Waiting for pod pod-239fa52d-efab-49dd-a38f-53c1feb1310b to disappear
Dec 17 15:21:45.435: INFO: Pod pod-239fa52d-efab-49dd-a38f-53c1feb1310b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:21:45.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2110" for this suite.
Dec 17 15:21:51.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:21:51.652: INFO: namespace emptydir-2110 deletion completed in 6.212299759s

• [SLOW TEST:16.833 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
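
The "(non-root,0644,default)" case above boils down to a pod shape like this sketch (Go; the UID, image, and command are illustrative, not the test's fixture): a non-root user writes a file into an emptyDir on the node's default medium and reports its mode:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod runs as a non-root UID, writes a file into an emptyDir on
// the default medium, and prints the file's mode for the test to check.
func emptyDirPod() *v1.Pod {
	uid := int64(1001) // any non-root UID
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: v1.PodSpec{
			RestartPolicy:   v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hello > /data/f && chmod 0644 /data/f && stat -c %a /data/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
			Volumes: []v1.Volume{{
				Name: "data",
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault},
				},
			}},
		},
	}
}
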
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:21:51.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-88f0ce74-6481-413e-8ae5-df3fd80e7730
STEP: Creating a pod to test consume configMaps
Dec 17 15:21:51.768: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671" in namespace "projected-4594" to be "success or failure"
Dec 17 15:21:51.794: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671": Phase="Pending", Reason="", readiness=false. Elapsed: 25.990941ms
Dec 17 15:21:53.805: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036977496s
Dec 17 15:21:55.828: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059874875s
Dec 17 15:21:57.867: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09955337s
Dec 17 15:21:59.889: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120720846s
Dec 17 15:22:01.900: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132358978s
STEP: Saw pod success
Dec 17 15:22:01.900: INFO: Pod "pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671" satisfied condition "success or failure"
Dec 17 15:22:01.905: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 15:22:02.188: INFO: Waiting for pod pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671 to disappear
Dec 17 15:22:02.205: INFO: Pod pod-projected-configmaps-2344bc14-2fc5-40de-a161-02bf09872671 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:22:02.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4594" for this suite.
Dec 17 15:22:08.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:22:08.438: INFO: namespace projected-4594 deletion completed in 6.225521777s

• [SLOW TEST:16.785 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
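
"Mappings and Item mode set" refers to a projected configMap volume that remaps a key to a new path and pins an explicit file mode. A sketch of that volume (Go; the key, path, and 0400 mode are illustrative):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume maps one configMap key to a custom path and
// sets a per-item file mode, overriding the volume-wide default.
func projectedConfigMapVolume(cmName string) v1.Volume {
	mode := int32(0400) // per-item mode overrides the volume default
	return v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: cmName},
						Items: []v1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}
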
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:22:08.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 17 15:22:08.686: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5748,SelfLink:/api/v1/namespaces/watch-5748/configmaps/e2e-watch-test-watch-closed,UID:01e8721c-a17a-4232-b924-7f2163f212dc,ResourceVersion:17031222,Generation:0,CreationTimestamp:2019-12-17 15:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 15:22:08.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5748,SelfLink:/api/v1/namespaces/watch-5748/configmaps/e2e-watch-test-watch-closed,UID:01e8721c-a17a-4232-b924-7f2163f212dc,ResourceVersion:17031223,Generation:0,CreationTimestamp:2019-12-17 15:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 17 15:22:08.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5748,SelfLink:/api/v1/namespaces/watch-5748/configmaps/e2e-watch-test-watch-closed,UID:01e8721c-a17a-4232-b924-7f2163f212dc,ResourceVersion:17031224,Generation:0,CreationTimestamp:2019-12-17 15:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 15:22:08.707: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5748,SelfLink:/api/v1/namespaces/watch-5748/configmaps/e2e-watch-test-watch-closed,UID:01e8721c-a17a-4232-b924-7f2163f212dc,ResourceVersion:17031225,Generation:0,CreationTimestamp:2019-12-17 15:22:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:22:08.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5748" for this suite.
Dec 17 15:22:14.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:22:14.845: INFO: namespace watch-5748 deletion completed in 6.134290723s

• [SLOW TEST:6.407 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
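
The trick the watch test relies on is resuming from the last ResourceVersion the closed watch delivered, so the MODIFIED (mutation: 2) and DELETED events are still observed. A sketch of that resume (Go, pre-1.17 client-go; the label selector mirrors the one in the log):

package e2esketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rewatchConfigMaps opens a new watch starting from the resourceVersion the
// previous watch last delivered, so no intervening event is missed even
// though the first watch was closed.
func rewatchConfigMaps(cs *kubernetes.Clientset, ns, lastRV string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV, // resume point from the closed watch
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
	return nil
}
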
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:22:14.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 17 15:22:24.118: INFO: Successfully updated pod "pod-update-3c2ac469-a0b9-460f-b47e-05e0a8e09c73"
STEP: verifying the updated pod is in kubernetes
Dec 17 15:22:24.138: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:22:24.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7272" for this suite.
Dec 17 15:22:46.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:22:46.305: INFO: namespace pods-7272 deletion completed in 22.161724038s

• [SLOW TEST:31.459 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
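
"Successfully updated pod" above is the outcome of a read-modify-write against the API server. A sketch of that update with conflict retry (Go, pre-1.17 client-go; the label key/value is illustrative):

package e2esketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel performs a Get, mutates the copy, and writes it back,
// retrying on the optimistic-concurrency conflict Update can return.
func updatePodLabel(cs *kubernetes.Clientset, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "value" // mutate the copy, then write it back
		_, err = cs.CoreV1().Pods(ns).Update(pod)
		return err
	})
}
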
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:22:46.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 17 15:22:46.390: INFO: Waiting up to 5m0s for pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e" in namespace "emptydir-5657" to be "success or failure"
Dec 17 15:22:46.398: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015907ms
Dec 17 15:22:48.414: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024070663s
Dec 17 15:22:50.430: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039730328s
Dec 17 15:22:52.442: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051385523s
Dec 17 15:22:54.460: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069708815s
Dec 17 15:22:56.472: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081766631s
STEP: Saw pod success
Dec 17 15:22:56.472: INFO: Pod "pod-3cab9ccd-af33-47d8-a3ea-780a5452578e" satisfied condition "success or failure"
Dec 17 15:22:56.479: INFO: Trying to get logs from node iruya-node pod pod-3cab9ccd-af33-47d8-a3ea-780a5452578e container test-container: 
STEP: delete the pod
Dec 17 15:22:56.604: INFO: Waiting for pod pod-3cab9ccd-af33-47d8-a3ea-780a5452578e to disappear
Dec 17 15:22:56.617: INFO: Pod pod-3cab9ccd-af33-47d8-a3ea-780a5452578e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:22:56.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5657" for this suite.
Dec 17 15:23:02.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:23:02.888: INFO: namespace emptydir-5657 deletion completed in 6.257952012s

• [SLOW TEST:16.583 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
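
The only piece that differs from the default-medium emptyDir sketch earlier is the medium. Setting Medium: Memory backs the volume with tmpfs instead of node disk:

package e2esketch

import v1 "k8s.io/api/core/v1"

// tmpfsEmptyDir backs the emptyDir with tmpfs (RAM) rather than node disk.
func tmpfsEmptyDir() v1.VolumeSource {
	return v1.VolumeSource{
		EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
	}
}
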
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:23:02.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-488
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 15:23:02.992: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 15:23:43.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-488 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 15:23:43.330: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 15:23:43.808: INFO: Waiting for endpoints: map[]
Dec 17 15:23:43.816: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-488 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 15:23:43.816: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 15:23:44.380: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:23:44.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-488" for this suite.
Dec 17 15:24:08.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:24:08.717: INFO: namespace pod-network-test-488 deletion completed in 24.325047684s

• [SLOW TEST:65.829 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
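
The curl the test shells out to above asks a netexec-style test container to dial a peer pod over HTTP and echo the hostname it reached; an empty "Waiting for endpoints: map[]" means every expected peer answered. A plain-Go equivalent of that probe (assumes a prober container serving the /dial endpoint, as in the log; addresses are illustrative):

package e2esketch

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

// dialProbe asks the prober at proberAddr to dial targetHost:8080 over HTTP
// once and report the hostname it reached, mirroring the curl in the log.
func dialProbe(proberAddr, targetHost string) (string, error) {
	u := fmt.Sprintf("http://%s/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
		proberAddr, url.QueryEscape(targetHost))
	resp, err := http.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}
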
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:24:08.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 15:24:08.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:24:16.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8515" for this suite.
Dec 17 15:25:09.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:25:09.168: INFO: namespace pods-8515 deletion completed in 52.201183744s

• [SLOW TEST:60.448 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
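
The test above retrieves logs through the pod's /log subresource over a websocket upgrade. client-go does not expose the websocket path directly, so this sketch streams the same subresource over plain HTTP (Go, pre-1.17 Stream signature; names illustrative):

package e2esketch

import (
	"io"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamPodLogs follows a container's logs via the /log subresource and
// copies them to stdout until the stream closes.
func streamPodLogs(cs *kubernetes.Clientset, ns, pod string) error {
	rc, err := cs.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{Follow: true}).Stream()
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}
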
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:25:09.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-4132
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4132 to expose endpoints map[]
Dec 17 15:25:09.327: INFO: Get endpoints failed (37.083435ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 17 15:25:10.363: INFO: successfully validated that service multi-endpoint-test in namespace services-4132 exposes endpoints map[] (1.073058121s elapsed)
STEP: Creating pod pod1 in namespace services-4132
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4132 to expose endpoints map[pod1:[100]]
Dec 17 15:25:14.450: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.065616243s elapsed, will retry)
Dec 17 15:25:19.569: INFO: successfully validated that service multi-endpoint-test in namespace services-4132 exposes endpoints map[pod1:[100]] (9.184198087s elapsed)
STEP: Creating pod pod2 in namespace services-4132
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4132 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 17 15:25:24.755: INFO: Unexpected endpoints: found map[88e12e7a-1d3f-4736-9a79-d3936ebe8ce4:[100]], expected map[pod1:[100] pod2:[101]] (5.159178152s elapsed, will retry)
Dec 17 15:25:27.915: INFO: successfully validated that service multi-endpoint-test in namespace services-4132 exposes endpoints map[pod1:[100] pod2:[101]] (8.319295152s elapsed)
STEP: Deleting pod pod1 in namespace services-4132
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4132 to expose endpoints map[pod2:[101]]
Dec 17 15:25:28.999: INFO: successfully validated that service multi-endpoint-test in namespace services-4132 exposes endpoints map[pod2:[101]] (1.067446616s elapsed)
STEP: Deleting pod pod2 in namespace services-4132
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4132 to expose endpoints map[]
Dec 17 15:25:29.076: INFO: successfully validated that service multi-endpoint-test in namespace services-4132 exposes endpoints map[] (59.813114ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:25:29.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4132" for this suite.
Dec 17 15:25:51.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:25:51.343: INFO: namespace services-4132 deletion completed in 22.202454041s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:42.174 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
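
The shape being validated above is one Service with two named ports, each resolving to a different backend pod, which is why the endpoints map moves from map[] to map[pod1:[100]] to map[pod1:[100] pod2:[101]] and back. A sketch of such a Service (Go; selector and port numbers are illustrative, though 100/101 echo the log):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService exposes two named ports that target different container
// ports, so each backing pod can appear behind a different endpoint port.
func multiportService() *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"selector-key": "selector-value"},
			Ports: []v1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}
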
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 17 15:25:51.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 17 15:25:51.649: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 17 15:25:51.669: INFO: Number of nodes with available pods: 0
Dec 17 15:25:51.669: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 17 15:25:51.796: INFO: Number of nodes with available pods: 0
Dec 17 15:25:51.796: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:52.874: INFO: Number of nodes with available pods: 0
Dec 17 15:25:52.875: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:53.807: INFO: Number of nodes with available pods: 0
Dec 17 15:25:53.807: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:54.820: INFO: Number of nodes with available pods: 0
Dec 17 15:25:54.820: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:55.912: INFO: Number of nodes with available pods: 0
Dec 17 15:25:55.912: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:56.813: INFO: Number of nodes with available pods: 0
Dec 17 15:25:56.814: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:57.811: INFO: Number of nodes with available pods: 0
Dec 17 15:25:57.811: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:58.805: INFO: Number of nodes with available pods: 0
Dec 17 15:25:58.805: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:25:59.809: INFO: Number of nodes with available pods: 0
Dec 17 15:25:59.809: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:00.809: INFO: Number of nodes with available pods: 1
Dec 17 15:26:00.809: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 17 15:26:00.862: INFO: Number of nodes with available pods: 1
Dec 17 15:26:00.863: INFO: Number of running nodes: 0, number of available pods: 1
Dec 17 15:26:01.880: INFO: Number of nodes with available pods: 0
Dec 17 15:26:01.880: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 17 15:26:01.906: INFO: Number of nodes with available pods: 0
Dec 17 15:26:01.906: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:03.828: INFO: Number of nodes with available pods: 0
Dec 17 15:26:03.828: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:03.929: INFO: Number of nodes with available pods: 0
Dec 17 15:26:03.930: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:04.913: INFO: Number of nodes with available pods: 0
Dec 17 15:26:04.913: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:05.932: INFO: Number of nodes with available pods: 0
Dec 17 15:26:05.932: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:06.917: INFO: Number of nodes with available pods: 0
Dec 17 15:26:06.917: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:07.916: INFO: Number of nodes with available pods: 0
Dec 17 15:26:07.916: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:08.913: INFO: Number of nodes with available pods: 0
Dec 17 15:26:08.913: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:09.920: INFO: Number of nodes with available pods: 0
Dec 17 15:26:09.920: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:10.921: INFO: Number of nodes with available pods: 0
Dec 17 15:26:10.921: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:11.918: INFO: Number of nodes with available pods: 0
Dec 17 15:26:11.918: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:12.919: INFO: Number of nodes with available pods: 0
Dec 17 15:26:12.919: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:13.922: INFO: Number of nodes with available pods: 0
Dec 17 15:26:13.922: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:14.916: INFO: Number of nodes with available pods: 0
Dec 17 15:26:14.916: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:15.914: INFO: Number of nodes with available pods: 0
Dec 17 15:26:15.914: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:16.927: INFO: Number of nodes with available pods: 0
Dec 17 15:26:16.927: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:17.917: INFO: Number of nodes with available pods: 0
Dec 17 15:26:17.917: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:18.918: INFO: Number of nodes with available pods: 0
Dec 17 15:26:18.918: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:19.917: INFO: Number of nodes with available pods: 0
Dec 17 15:26:19.918: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:20.915: INFO: Number of nodes with available pods: 0
Dec 17 15:26:20.915: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:21.916: INFO: Number of nodes with available pods: 0
Dec 17 15:26:21.916: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:22.914: INFO: Number of nodes with available pods: 0
Dec 17 15:26:22.914: INFO: Node iruya-node is running more than one daemon pod
Dec 17 15:26:23.917: INFO: Number of nodes with available pods: 1
Dec 17 15:26:23.917: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2776, will wait for the garbage collector to delete the pods
Dec 17 15:26:24.054: INFO: Deleting DaemonSet.extensions daemon-set took: 75.127112ms
Dec 17 15:26:24.355: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.580254ms
Dec 17 15:26:31.260: INFO: Number of nodes with available pods: 0
Dec 17 15:26:31.261: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 15:26:31.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2776/daemonsets","resourceVersion":"17031820"},"items":null}

Dec 17 15:26:31.264: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2776/pods","resourceVersion":"17031820"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 17 15:26:31.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2776" for this suite.
Dec 17 15:26:37.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 15:26:37.538: INFO: namespace daemonsets-2776 deletion completed in 6.182878402s

• [SLOW TEST:46.195 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
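
The "complex daemon" above is a DaemonSet whose pod template carries a node selector, so relabeling a node from blue to green schedules and unschedules its pod, plus the RollingUpdate strategy the test switches to mid-run. A sketch of that object (Go; names, image, and the color label are illustrative, though blue/green echo the log):

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// complexDaemonSet runs its pods only on nodes labeled color=blue and uses
// a RollingUpdate strategy, matching the behavior exercised above.
func complexDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []v1.Container{{
						Name:  "app",
						Image: "nginx",
					}},
				},
			},
		},
	}
}
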
SSSS
Dec 17 15:26:37.539: INFO: Running AfterSuite actions on all nodes
Dec 17 15:26:37.539: INFO: Running AfterSuite actions on node 1
Dec 17 15:26:37.539: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 9027.927 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (9028.19s)
FAIL