I0514 12:55:57.372807 6 e2e.go:243] Starting e2e run "36f69dbf-5939-4656-8dd2-3f241d0129c0" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589460956 - Will randomize all specs
Will run 215 of 4412 specs

May 14 12:55:57.564: INFO: >>> kubeConfig: /root/.kube/config
May 14 12:55:57.566: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 14 12:55:57.582: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 14 12:55:57.608: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 14 12:55:57.608: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 14 12:55:57.608: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 14 12:55:57.618: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 14 12:55:57.618: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 14 12:55:57.618: INFO: e2e test version: v1.15.11
May 14 12:55:57.619: INFO: kube-apiserver version: v1.15.7
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:55:57.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 14 12:55:57.724: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2fd01dcb-f418-468f-9971-9e5a3ece7e56
STEP: Creating a pod to test consume configMaps
May 14 12:55:57.739: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef" in namespace "projected-4342" to be "success or failure"
May 14 12:55:57.794: INFO: Pod "pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef": Phase="Pending", Reason="", readiness=false. Elapsed: 54.086613ms
May 14 12:55:59.979: INFO: Pod "pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239069277s
May 14 12:56:01.984: INFO: Pod "pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244564938s
May 14 12:56:03.990: INFO: Pod "pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25014976s
STEP: Saw pod success
May 14 12:56:03.990: INFO: Pod "pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef" satisfied condition "success or failure"
May 14 12:56:03.993: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef container projected-configmap-volume-test:
STEP: delete the pod
May 14 12:56:04.016: INFO: Waiting for pod pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef to disappear
May 14 12:56:04.020: INFO: Pod pod-projected-configmaps-c76841e5-dbe2-493c-a820-97943df419ef no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 12:56:04.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4342" for this suite.
May 14 12:56:10.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 12:56:10.117: INFO: namespace projected-4342 deletion completed in 6.0946483s

• [SLOW TEST:12.499 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
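Note: the objects this spec creates look roughly like the sketch below. The names, key, and test image are illustrative assumptions rather than values captured from this run; the behavior under test is the items mapping, which surfaces a ConfigMap key at a different path inside the projected volume.

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # hypothetical; the run appends a UUID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed checker image
    args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2   # the "mapping": key data-1 surfaces at a new relative path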
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:56:10.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-d7d8e403-0d45-4b9c-a4a3-576b1971f04e in namespace container-probe-3141
May 14 12:56:14.178: INFO: Started pod liveness-d7d8e403-0d45-4b9c-a4a3-576b1971f04e in namespace container-probe-3141
STEP: checking the pod's current state and verifying that restartCount is present
May 14 12:56:14.181: INFO: Initial restart count of pod liveness-d7d8e403-0d45-4b9c-a4a3-576b1971f04e is 0
May 14 12:56:40.382: INFO: Restart count of pod container-probe-3141/liveness-d7d8e403-0d45-4b9c-a4a3-576b1971f04e is now 1 (26.200661771s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 12:56:40.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3141" for this suite.
May 14 12:56:46.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 12:56:46.588: INFO: namespace container-probe-3141 deletion completed in 6.148547366s

• [SLOW TEST:36.470 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
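Note: a minimal sketch of the pod this spec creates, assuming the standard e2e liveness image; the image, args, port, and probe timings are assumptions, not values from the run. The kubelet probes /healthz until the server starts failing it, then restarts the container, which is the restart count 0 -> 1 transition observed about 26s in.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http               # hypothetical; the run generates a UUID name
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumed: serves /healthz, then starts failing it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15       # illustrative timings
      failureThreshold: 1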
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:56:46.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
May 14 12:56:46.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 14 12:56:46.676: INFO: Waiting for terminating namespaces to be deleted...
May 14 12:56:46.679: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
May 14 12:56:46.685: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 14 12:56:46.685: INFO: Container kube-proxy ready: true, restart count 0
May 14 12:56:46.685: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
May 14 12:56:46.685: INFO: Container kindnet-cni ready: true, restart count 0
May 14 12:56:46.685: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
May 14 12:56:46.716: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
May 14 12:56:46.716: INFO: Container coredns ready: true, restart count 0
May 14 12:56:46.716: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
May 14 12:56:46.716: INFO: Container coredns ready: true, restart count 0
May 14 12:56:46.716: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
May 14 12:56:46.716: INFO: Container kube-proxy ready: true, restart count 0
May 14 12:56:46.716: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
May 14 12:56:46.716: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
May 14 12:56:46.882: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
May 14 12:56:46.882: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
May 14 12:56:46.882: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
May 14 12:56:46.882: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
May 14 12:56:46.882: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
May 14 12:56:46.882: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff.160ee65d4fb96094], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2203/filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff.160ee65de0201a5c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff.160ee65e338416d7], Reason = [Created], Message = [Created container filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff]
STEP: Considering event: Type = [Normal], Name = [filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff.160ee65e468ed6fb], Reason = [Started], Message = [Started container filler-pod-711e854a-62fd-46f5-9d7a-4b26025057ff]
STEP: Considering event: Type = [Normal], Name = [filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a.160ee65d4d365050], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2203/filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a.160ee65da19d9bd8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a.160ee65e1f097444], Reason = [Created], Message = [Created container filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a]
STEP: Considering event: Type = [Normal], Name = [filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a.160ee65e37d54a7e], Reason = [Started], Message = [Started container filler-pod-948d0e35-652b-4e99-bbd5-7efbd2b6199a]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160ee65eb6a9b00e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 12:56:54.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2203" for this suite.
May 14 12:57:00.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 12:57:00.274: INFO: namespace sched-pred-2203 deletion completed in 6.235018298s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:13.686 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
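Note: roughly what the filler pods and the final unschedulable pod look like, as a sketch. The pause image is taken from the events above; the CPU figures are illustrative. The filler pods' requests consume most of each node's allocatable CPU, so the last pod cannot fit on any node and fails scheduling with "Insufficient cpu".

apiVersion: v1
kind: Pod
metadata:
  name: filler-pod                  # hypothetical; the run generates UUID names
spec:
  containers:
  - name: filler-pod
    image: k8s.gcr.io/pause:3.1     # image taken from the events above
    resources:
      requests:
        cpu: 800m                   # illustrative: node allocatable minus existing requests
      limits:
        cpu: 800m
---
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"                    # illustrative: exceeds remaining allocatable CPU on every node
      limits:
        cpu: "1"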
[sig-cli] Kubectl client [k8s.io] Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:57:00.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 14 12:57:00.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4082'
May 14 12:57:04.843: INFO: stderr: ""
May 14 12:57:04.843: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 12:57:04.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:05.002: INFO: stderr: ""
May 14 12:57:05.002: INFO: stdout: "update-demo-nautilus-599x7 update-demo-nautilus-jgffh "
May 14 12:57:05.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-599x7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:05.088: INFO: stderr: ""
May 14 12:57:05.088: INFO: stdout: ""
May 14 12:57:05.088: INFO: update-demo-nautilus-599x7 is created but not running
May 14 12:57:10.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:10.200: INFO: stderr: ""
May 14 12:57:10.200: INFO: stdout: "update-demo-nautilus-599x7 update-demo-nautilus-jgffh "
May 14 12:57:10.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-599x7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:10.320: INFO: stderr: ""
May 14 12:57:10.320: INFO: stdout: "true"
May 14 12:57:10.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-599x7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:10.412: INFO: stderr: ""
May 14 12:57:10.412: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:57:10.412: INFO: validating pod update-demo-nautilus-599x7
May 14 12:57:10.429: INFO: got data: { "image": "nautilus.jpg" }
May 14 12:57:10.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:57:10.429: INFO: update-demo-nautilus-599x7 is verified up and running
May 14 12:57:10.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgffh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:10.516: INFO: stderr: ""
May 14 12:57:10.517: INFO: stdout: "true"
May 14 12:57:10.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgffh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:10.606: INFO: stderr: ""
May 14 12:57:10.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:57:10.606: INFO: validating pod update-demo-nautilus-jgffh
May 14 12:57:10.610: INFO: got data: { "image": "nautilus.jpg" }
May 14 12:57:10.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:57:10.610: INFO: update-demo-nautilus-jgffh is verified up and running
STEP: scaling down the replication controller
May 14 12:57:10.612: INFO: scanned /root for discovery docs:
May 14 12:57:10.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4082'
May 14 12:57:11.764: INFO: stderr: ""
May 14 12:57:11.764: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 12:57:11.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:11.869: INFO: stderr: ""
May 14 12:57:11.869: INFO: stdout: "update-demo-nautilus-599x7 update-demo-nautilus-jgffh "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 14 12:57:16.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:16.961: INFO: stderr: ""
May 14 12:57:16.961: INFO: stdout: "update-demo-nautilus-599x7 update-demo-nautilus-jgffh "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 14 12:57:21.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:22.054: INFO: stderr: ""
May 14 12:57:22.054: INFO: stdout: "update-demo-nautilus-jgffh "
May 14 12:57:22.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgffh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:22.149: INFO: stderr: ""
May 14 12:57:22.149: INFO: stdout: "true"
May 14 12:57:22.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgffh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:22.250: INFO: stderr: ""
May 14 12:57:22.250: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:57:22.250: INFO: validating pod update-demo-nautilus-jgffh
May 14 12:57:22.256: INFO: got data: { "image": "nautilus.jpg" }
May 14 12:57:22.256: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:57:22.256: INFO: update-demo-nautilus-jgffh is verified up and running
STEP: scaling up the replication controller
May 14 12:57:22.259: INFO: scanned /root for discovery docs:
May 14 12:57:22.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4082'
May 14 12:57:23.415: INFO: stderr: ""
May 14 12:57:23.415: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 12:57:23.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:23.507: INFO: stderr: ""
May 14 12:57:23.507: INFO: stdout: "update-demo-nautilus-j77b5 update-demo-nautilus-jgffh "
May 14 12:57:23.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j77b5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:23.597: INFO: stderr: ""
May 14 12:57:23.597: INFO: stdout: ""
May 14 12:57:23.597: INFO: update-demo-nautilus-j77b5 is created but not running
May 14 12:57:28.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4082'
May 14 12:57:28.705: INFO: stderr: ""
May 14 12:57:28.705: INFO: stdout: "update-demo-nautilus-j77b5 update-demo-nautilus-jgffh "
May 14 12:57:28.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j77b5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:28.806: INFO: stderr: ""
May 14 12:57:28.806: INFO: stdout: "true"
May 14 12:57:28.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j77b5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:28.902: INFO: stderr: ""
May 14 12:57:28.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:57:28.902: INFO: validating pod update-demo-nautilus-j77b5
May 14 12:57:28.906: INFO: got data: { "image": "nautilus.jpg" }
May 14 12:57:28.906: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:57:28.906: INFO: update-demo-nautilus-j77b5 is verified up and running
May 14 12:57:28.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgffh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:28.999: INFO: stderr: ""
May 14 12:57:28.999: INFO: stdout: "true"
May 14 12:57:29.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jgffh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4082'
May 14 12:57:29.086: INFO: stderr: ""
May 14 12:57:29.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 12:57:29.086: INFO: validating pod update-demo-nautilus-jgffh
May 14 12:57:29.089: INFO: got data: { "image": "nautilus.jpg" }
May 14 12:57:29.089: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 12:57:29.089: INFO: update-demo-nautilus-jgffh is verified up and running
STEP: using delete to clean up resources
May 14 12:57:29.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4082'
May 14 12:57:29.194: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 12:57:29.194: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 14 12:57:29.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4082'
May 14 12:57:29.299: INFO: stderr: "No resources found.\n"
May 14 12:57:29.300: INFO: stdout: ""
May 14 12:57:29.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4082 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 14 12:57:29.383: INFO: stderr: ""
May 14 12:57:29.383: INFO: stdout: "update-demo-nautilus-j77b5\nupdate-demo-nautilus-jgffh\n"
May 14 12:57:29.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4082'
May 14 12:57:29.963: INFO: stderr: "No resources found.\n"
May 14 12:57:29.963: INFO: stdout: ""
May 14 12:57:29.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4082 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 14 12:57:30.049: INFO: stderr: ""
May 14 12:57:30.049: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 12:57:30.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4082" for this suite.
May 14 12:57:52.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 12:57:52.136: INFO: namespace kubectl-4082 deletion completed in 22.083188521s

• [SLOW TEST:51.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:57:52.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-e1360327-7a4b-4e20-809a-a802435a3437
STEP: Creating a pod to test consume secrets
May 14 12:57:52.382: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa" in namespace "projected-6403" to be "success or failure"
May 14 12:57:52.420: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 38.093205ms
May 14 12:57:54.499: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117256637s
May 14 12:57:56.536: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153816759s
STEP: Saw pod success
May 14 12:57:56.536: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa" satisfied condition "success or failure"
May 14 12:57:56.539: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa container projected-secret-volume-test:
STEP: delete the pod
May 14 12:57:56.609: INFO: Waiting for pod pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa to disappear
May 14 12:57:56.613: INFO: Pod pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 12:57:56.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6403" for this suite.
May 14 12:58:02.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 12:58:02.841: INFO: namespace projected-6403 deletion completed in 6.225847726s

• [SLOW TEST:10.705 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
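Note: a sketch of the pod under test, assuming a mounttest-style checker image (image, args, and the chosen mode are assumptions). The essential part is projected.defaultMode, which sets the file permission bits on every file the projected secret surfaces.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets       # hypothetical; the run appends a UUID
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed checker image
    args: ["--file_mode=/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400             # the mode under test; illustrative value
      sources:
      - secret:
          name: projected-secret-test   # hypothetical name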
Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 38.093205ms May 14 12:57:54.499: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117256637s May 14 12:57:56.536: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153816759s STEP: Saw pod success May 14 12:57:56.536: INFO: Pod "pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa" satisfied condition "success or failure" May 14 12:57:56.539: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa container projected-secret-volume-test: STEP: delete the pod May 14 12:57:56.609: INFO: Waiting for pod pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa to disappear May 14 12:57:56.613: INFO: Pod pod-projected-secrets-94a47317-9a44-4e89-896b-4a776a5f8eaa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 12:57:56.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6403" for this suite. May 14 12:58:02.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:58:02.841: INFO: namespace projected-6403 deletion completed in 6.225847726s • [SLOW TEST:10.705 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 12:58:02.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 12:58:02.884: INFO: Creating deployment "test-recreate-deployment" May 14 12:58:02.895: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 14 12:58:02.956: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 14 12:58:04.966: INFO: Waiting deployment "test-recreate-deployment" to complete May 14 12:58:04.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057882, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057882, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057883, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057882, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 12:58:06.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057882, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057882, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057883, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725057882, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 12:58:08.971: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 14 12:58:08.976: INFO: Updating deployment test-recreate-deployment May 14 12:58:08.976: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 14 12:58:10.854: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5629,SelfLink:/apis/apps/v1/namespaces/deployment-5629/deployments/test-recreate-deployment,UID:d30aecdc-ac25-4396-a596-d9ae494df27f,ResourceVersion:10851121,Generation:2,CreationTimestamp:2020-05-14 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-14 12:58:10 +0000 UTC 2020-05-14 12:58:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-14 12:58:10 +0000 UTC 2020-05-14 12:58:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 14 12:58:10.966: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5629,SelfLink:/apis/apps/v1/namespaces/deployment-5629/replicasets/test-recreate-deployment-5c8c9cc69d,UID:1b7caa44-4f4c-40af-b236-b75915c49b40,ResourceVersion:10851120,Generation:1,CreationTimestamp:2020-05-14 12:58:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d30aecdc-ac25-4396-a596-d9ae494df27f 0xc00196b177 0xc00196b178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 12:58:10.966: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 14 12:58:10.966: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5629,SelfLink:/apis/apps/v1/namespaces/deployment-5629/replicasets/test-recreate-deployment-6df85df6b9,UID:961e526a-f8b4-4ec2-8c44-6e2ae547fd51,ResourceVersion:10851109,Generation:2,CreationTimestamp:2020-05-14 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d30aecdc-ac25-4396-a596-d9ae494df27f 0xc00196b247 0xc00196b248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 12:58:11.024: INFO: Pod "test-recreate-deployment-5c8c9cc69d-hmmcx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-hmmcx,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5629,SelfLink:/api/v1/namespaces/deployment-5629/pods/test-recreate-deployment-5c8c9cc69d-hmmcx,UID:82e4fcbb-aacd-41de-82a9-c9f7692fad99,ResourceVersion:10851123,Generation:0,CreationTimestamp:2020-05-14 12:58:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 1b7caa44-4f4c-40af-b236-b75915c49b40 0xc00196bb07 0xc00196bb08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9nplz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9nplz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9nplz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00196bb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00196bba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:58:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:58:10 +0000 UTC ContainersNotReady containers with unready 
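Note: reconstructed from the object dumps above, the deployment under test looks essentially like this; every field here is grounded in the logged spec (label name=sample-pod-3, redis image, Recreate strategy, one replica). The rollout is then triggered by patching the template, swapping the container for nginx / docker.io/library/nginx:1.14-alpine as the dumps show.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                  # all old pods are terminated before new ones are created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0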
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:58:17.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0514 12:58:27.316246 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 14 12:58:27.316: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 12:58:27.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9493" for this suite.
May 14 12:58:33.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 12:58:33.418: INFO: namespace gc-9493 deletion completed in 6.098952318s

• [SLOW TEST:16.201 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
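Note: a sketch of the ownership chain this spec exercises; the pod name, RC name, and uid are hypothetical. Each pod the RC creates carries an ownerReference back to the RC, so deleting the RC without orphaning lets the garbage collector remove the pods too, which is what the "wait for all pods to be garbage collected" step observes.

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-abcde          # hypothetical
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest.rc              # assumed RC name
    uid: 00000000-0000-0000-0000-000000000000   # hypothetical
    controller: true
    blockOwnerDeletion: true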
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 12:58:33.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-164
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
May 14 12:58:33.531: INFO: Found 0 stateful pods, waiting for 3
May 14 12:58:43.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:58:43.537: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:58:43.537: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
May 14 12:58:53.542: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:58:53.543: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:58:53.543: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
May 14 12:58:53.569: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 14 12:59:03.628: INFO: Updating stateful set ss2
May 14 12:59:03.659: INFO: Waiting for Pod statefulset-164/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
May 14 12:59:13.834: INFO: Found 2 stateful pods, waiting for 3
May 14 12:59:23.838: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:59:23.838: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 12:59:23.838: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 14 12:59:23.858: INFO: Updating stateful set ss2
May 14 12:59:23.924: INFO: Waiting for Pod statefulset-164/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
May 14 12:59:33.947: INFO: Updating stateful set ss2
May 14 12:59:34.078: INFO: Waiting for StatefulSet statefulset-164/ss2 to complete update
May 14 12:59:34.078: INFO: Waiting for Pod statefulset-164/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 14 12:59:44.087: INFO: Deleting all statefulset in ns statefulset-164
May 14 12:59:44.091: INFO: Scaling statefulset ss2 to 0
May 14 13:00:14.104: INFO: Waiting for statefulset status.replicas updated to 0
May 14 13:00:14.107: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 13:00:14.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-164" for this suite.
May 14 13:00:22.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 13:00:22.232: INFO: namespace statefulset-164 deletion completed in 8.109472765s

• [SLOW TEST:108.813 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
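Note: a sketch of the StatefulSet under test, keeping the fields the log pins down (name ss2, service test, 3 replicas, nginx:1.14-alpine later updated to 1.15-alpine); the labels are assumptions. The canary and phased behavior comes from rollingUpdate.partition: only ordinals >= partition receive the update revision (ss2-7c9b54fd4c above), and lowering the partition step by step rolls the update out pod by pod, ss2-2 first, then ss2-1, then ss2-0.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                 # the headless service created above
  replicas: 3
  selector:
    matchLabels:
      app: ss2                      # hypothetical labels
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                  # canary: only ordinal 2 (ss2-2) gets the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # later updated to nginx:1.15-alpine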
[sig-cli] Kubectl client [k8s.io] Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 13:00:22.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
May 14 13:00:22.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4677'
May 14 13:00:22.586: INFO: stderr: ""
May 14 13:00:22.587: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 14 13:00:22.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4677'
May 14 13:00:22.704: INFO: stderr: ""
May 14 13:00:22.704: INFO: stdout: "update-demo-nautilus-bbpl5 update-demo-nautilus-wvt6c "
May 14 13:00:22.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbpl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4677'
May 14 13:00:22.798: INFO: stderr: ""
May 14 13:00:22.798: INFO: stdout: ""
May 14 13:00:22.798: INFO: update-demo-nautilus-bbpl5 is created but not running
May 14 13:00:27.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4677'
May 14 13:00:27.927: INFO: stderr: ""
May 14 13:00:27.927: INFO: stdout: "update-demo-nautilus-bbpl5 update-demo-nautilus-wvt6c "
May 14 13:00:27.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbpl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4677'
May 14 13:00:28.048: INFO: stderr: ""
May 14 13:00:28.048: INFO: stdout: "true"
May 14 13:00:28.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bbpl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4677'
May 14 13:00:28.234: INFO: stderr: ""
May 14 13:00:28.234: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 13:00:28.234: INFO: validating pod update-demo-nautilus-bbpl5
May 14 13:00:28.240: INFO: got data: { "image": "nautilus.jpg" }
May 14 13:00:28.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 13:00:28.240: INFO: update-demo-nautilus-bbpl5 is verified up and running
May 14 13:00:28.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvt6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4677'
May 14 13:00:28.340: INFO: stderr: ""
May 14 13:00:28.340: INFO: stdout: "true"
May 14 13:00:28.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvt6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4677'
May 14 13:00:28.429: INFO: stderr: ""
May 14 13:00:28.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 14 13:00:28.429: INFO: validating pod update-demo-nautilus-wvt6c
May 14 13:00:28.433: INFO: got data: { "image": "nautilus.jpg" }
May 14 13:00:28.433: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 14 13:00:28.433: INFO: update-demo-nautilus-wvt6c is verified up and running
STEP: using delete to clean up resources
May 14 13:00:28.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4677'
May 14 13:00:28.553: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 14 13:00:28.553: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 14 13:00:28.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4677'
May 14 13:00:28.659: INFO: stderr: "No resources found.\n"
May 14 13:00:28.659: INFO: stdout: ""
May 14 13:00:28.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4677 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 14 13:00:28.794: INFO: stderr: ""
May 14 13:00:28.794: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 13:00:28.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4677" for this suite.
May 14 13:00:34.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 13:00:34.883: INFO: namespace kubectl-4677 deletion completed in 6.085710341s

• [SLOW TEST:12.650 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 14 13:00:34.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:00:34.883: INFO: namespace kubectl-4677 deletion completed in 6.085710341s • [SLOW TEST:12.650 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:00:34.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-4f04ec09-3b4f-41f6-9fef-3322ec0bf94c STEP: Creating a pod to test consume configMaps May 14 13:00:34.946: INFO: Waiting up to 5m0s for pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88" in namespace "configmap-7900" to be "success or failure" May 14 13:00:34.950: INFO: Pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814027ms May 14 13:00:36.955: INFO: Pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008487911s May 14 13:00:38.958: INFO: Pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01213432s May 14 13:00:41.216: INFO: Pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88": Phase="Running", Reason="", readiness=true. Elapsed: 6.269324253s May 14 13:00:43.496: INFO: Pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.549355883s STEP: Saw pod success May 14 13:00:43.496: INFO: Pod "pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88" satisfied condition "success or failure" May 14 13:00:43.498: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88 container configmap-volume-test: STEP: delete the pod May 14 13:00:44.078: INFO: Waiting for pod pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88 to disappear May 14 13:00:44.209: INFO: Pod pod-configmaps-7432e6aa-b863-492a-ba13-40b454930c88 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:00:44.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7900" for this suite. 
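Consuming one ConfigMap in multiple volumes of the same pod just means declaring it twice under spec.volumes and mounting each volume at its own path; a sketch (names, keys, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume    # same ConfigMap, second volume
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2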
May 14 13:00:52.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:00:52.371: INFO: namespace configmap-7900 deletion completed in 8.157809948s • [SLOW TEST:17.487 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:00:52.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 14 13:00:52.601: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8098,SelfLink:/api/v1/namespaces/watch-8098/configmaps/e2e-watch-test-label-changed,UID:e91fdb4c-c27c-4108-80e4-f9f8ce80eb20,ResourceVersion:10851857,Generation:0,CreationTimestamp:2020-05-14 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 13:00:52.602: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8098,SelfLink:/api/v1/namespaces/watch-8098/configmaps/e2e-watch-test-label-changed,UID:e91fdb4c-c27c-4108-80e4-f9f8ce80eb20,ResourceVersion:10851858,Generation:0,CreationTimestamp:2020-05-14 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 14 13:00:52.602: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8098,SelfLink:/api/v1/namespaces/watch-8098/configmaps/e2e-watch-test-label-changed,UID:e91fdb4c-c27c-4108-80e4-f9f8ce80eb20,ResourceVersion:10851859,Generation:0,CreationTimestamp:2020-05-14 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 14 13:01:02.648: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8098,SelfLink:/api/v1/namespaces/watch-8098/configmaps/e2e-watch-test-label-changed,UID:e91fdb4c-c27c-4108-80e4-f9f8ce80eb20,ResourceVersion:10851880,Generation:0,CreationTimestamp:2020-05-14 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 13:01:02.648: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8098,SelfLink:/api/v1/namespaces/watch-8098/configmaps/e2e-watch-test-label-changed,UID:e91fdb4c-c27c-4108-80e4-f9f8ce80eb20,ResourceVersion:10851881,Generation:0,CreationTimestamp:2020-05-14 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 14 13:01:02.648: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8098,SelfLink:/api/v1/namespaces/watch-8098/configmaps/e2e-watch-test-label-changed,UID:e91fdb4c-c27c-4108-80e4-f9f8ce80eb20,ResourceVersion:10851882,Generation:0,CreationTimestamp:2020-05-14 13:00:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:01:02.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8098" for this suite. 
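The object being watched is an ordinary ConfigMap whose label is toggled in and out of the watch's selector; sketched below (the mutation counter is the value the test increments between events):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"

A label-selected watch, e.g. kubectl get configmap -l watch-this-configmap=label-changed-and-restored --watch, reports DELETED as soon as the label stops matching the selector even though the object still exists, and ADDED again once the label is restored; that is exactly the ADDED/MODIFIED/DELETED sequence in the log above.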
May 14 13:01:08.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:08.747: INFO: namespace watch-8098 deletion completed in 6.093976155s • [SLOW TEST:16.376 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:01:08.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-928bddca-f1b6-41eb-9f5e-8020f514688c STEP: Creating a pod to test consume secrets May 14 13:01:08.900: INFO: Waiting up to 5m0s for pod "pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b" in namespace "secrets-3546" to be "success or failure" May 14 13:01:08.916: INFO: Pod "pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.911982ms May 14 13:01:10.946: INFO: Pod "pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045844962s May 14 13:01:12.951: INFO: Pod "pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050531704s STEP: Saw pod success May 14 13:01:12.951: INFO: Pod "pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b" satisfied condition "success or failure" May 14 13:01:12.954: INFO: Trying to get logs from node iruya-worker pod pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b container secret-volume-test: STEP: delete the pod May 14 13:01:12.979: INFO: Waiting for pod pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b to disappear May 14 13:01:12.988: INFO: Pod pod-secrets-d99fa1a1-464f-4c68-84ee-66826df9436b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:01:12.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3546" for this suite. 
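Mounting a Secret as a volume follows the same shape as the ConfigMap cases earlier; a sketch (names, key, value, and image are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==         # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true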
May 14 13:01:19.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:19.236: INFO: namespace secrets-3546 deletion completed in 6.225420227s • [SLOW TEST:10.489 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:01:19.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-a09fb6ac-6b6a-43ff-a2f2-d2db36a018ab in namespace container-probe-5565 May 14 13:01:25.309: INFO: Started pod test-webserver-a09fb6ac-6b6a-43ff-a2f2-d2db36a018ab in namespace container-probe-5565 STEP: checking the pod's current state and verifying that restartCount is present May 14 13:01:25.312: INFO: Initial restart count of pod test-webserver-a09fb6ac-6b6a-43ff-a2f2-d2db36a018ab is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:05:26.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5565" for this suite. 
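This is the healthy counterpart to the restart test earlier in the run: the liveness endpoint keeps answering with 200, so restartCount stays 0 for the whole four-minute observation window. A sketch of such a pod (image and probe details are assumptions; any server that always answers the probed path works):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed; an always-200 server
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                # a path the server actually serves
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 1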
May 14 13:05:32.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:05:32.688: INFO: namespace container-probe-5565 deletion completed in 6.586918774s • [SLOW TEST:253.451 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:05:32.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 14 13:05:32.783: INFO: Waiting up to 5m0s for pod "downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9" in namespace "downward-api-6920" to be "success or failure" May 14 13:05:32.794: INFO: Pod "downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.027784ms May 14 13:05:34.799: INFO: Pod "downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015734143s May 14 13:05:36.807: INFO: Pod "downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023322369s STEP: Saw pod success May 14 13:05:36.807: INFO: Pod "downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9" satisfied condition "success or failure" May 14 13:05:36.808: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9 container dapi-container: STEP: delete the pod May 14 13:05:36.832: INFO: Waiting for pod downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9 to disappear May 14 13:05:36.968: INFO: Pod downward-api-a1c51383-6af1-464f-9fbe-75ba247875a9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:05:36.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6920" for this suite. 
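The three values this spec checks come straight from fieldRef environment variables; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP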
May 14 13:05:42.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:05:43.058: INFO: namespace downward-api-6920 deletion completed in 6.085655302s • [SLOW TEST:10.369 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:05:43.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:05:43.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624" in namespace "downward-api-7979" to be "success or failure" May 14 13:05:43.131: INFO: Pod "downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590953ms May 14 13:05:45.135: INFO: Pod "downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014805181s May 14 13:05:47.139: INFO: Pod "downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018608101s STEP: Saw pod success May 14 13:05:47.139: INFO: Pod "downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624" satisfied condition "success or failure" May 14 13:05:47.141: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624 container client-container: STEP: delete the pod May 14 13:05:47.162: INFO: Waiting for pod downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624 to disappear May 14 13:05:47.166: INFO: Pod downwardapi-volume-09ca3b1b-2b74-4053-815c-744447c28624 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:05:47.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7979" for this suite. 
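Here the downward API is consumed as a volume rather than env vars, and the container deliberately sets no CPU limit, so the projected limits.cpu falls back to the node's allocatable CPU; sketched (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu     # unset on the container, so node allocatable CPU is reported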
May 14 13:05:53.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:05:53.320: INFO: namespace downward-api-7979 deletion completed in 6.151224203s • [SLOW TEST:10.262 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:05:53.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:05:53.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7568" for this suite. 
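The QOS class being verified is derived, never declared: equal requests and limits on every container yield Guaranteed, requests below limits yield Burstable, and no resources at all yields BestEffort. A Guaranteed sketch (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: qos-example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi

kubectl get pod qos-example -o jsonpath='{.status.qosClass}' then prints Guaranteed, which is the status field this spec checks after submitting the pod.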
May 14 13:06:15.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:06:15.525: INFO: namespace pods-7568 deletion completed in 22.081537871s • [SLOW TEST:22.204 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:06:15.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:06:15.662: INFO: Creating ReplicaSet my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b May 14 13:06:15.687: INFO: Pod name my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b: Found 0 pods out of 1 May 14 13:06:20.692: INFO: Pod name my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b: Found 1 pods out of 1 May 14 13:06:20.692: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b" is running May 14 13:06:20.696: INFO: Pod "my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b-8t62x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:06:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:06:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:06:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:06:15 +0000 UTC Reason: Message:}]) May 14 13:06:20.696: INFO: Trying to dial the pod May 14 13:06:25.738: INFO: Controller my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b: Got expected result from replica 1 [my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b-8t62x]: "my-hostname-basic-5503e06f-ff9d-4c02-ace5-f1514814165b-8t62x", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:06:25.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7446" for this suite. 
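The ReplicaSet under test serves each replica's hostname over HTTP, which is what the "Trying to dial the pod" step exercises; a sketch (image tag and port are assumptions):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed tag
        ports:
        - containerPort: 9376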
May 14 13:06:31.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:06:31.854: INFO: namespace replicaset-7446 deletion completed in 6.113266483s • [SLOW TEST:16.329 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:06:31.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-khtr STEP: Creating a pod to test atomic-volume-subpath May 14 13:06:32.008: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-khtr" in namespace "subpath-2092" to be "success or failure" May 14 13:06:32.060: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Pending", Reason="", readiness=false. Elapsed: 51.264804ms May 14 13:06:34.064: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055256256s May 14 13:06:36.068: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 4.059756389s May 14 13:06:38.078: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 6.069355635s May 14 13:06:40.082: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 8.073504725s May 14 13:06:42.087: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 10.078324302s May 14 13:06:44.091: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 12.082654475s May 14 13:06:46.095: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 14.086979974s May 14 13:06:48.099: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 16.090797382s May 14 13:06:50.103: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 18.094370925s May 14 13:06:52.107: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 20.098838794s May 14 13:06:54.112: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. Elapsed: 22.104181887s May 14 13:06:56.168: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.159251984s May 14 13:06:58.171: INFO: Pod "pod-subpath-test-projected-khtr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.162915731s STEP: Saw pod success May 14 13:06:58.171: INFO: Pod "pod-subpath-test-projected-khtr" satisfied condition "success or failure" May 14 13:06:58.203: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-khtr container test-container-subpath-projected-khtr: STEP: delete the pod May 14 13:06:58.250: INFO: Waiting for pod pod-subpath-test-projected-khtr to disappear May 14 13:06:58.300: INFO: Pod pod-subpath-test-projected-khtr no longer exists STEP: Deleting pod pod-subpath-test-projected-khtr May 14 13:06:58.300: INFO: Deleting pod "pod-subpath-test-projected-khtr" in namespace "subpath-2092" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:06:58.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2092" for this suite. May 14 13:07:04.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:07:04.437: INFO: namespace subpath-2092 deletion completed in 6.101637785s • [SLOW TEST:32.582 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:07:04.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-5825/secret-test-9f8abab1-71fe-4f6c-ad7f-721e924f50fa STEP: Creating a pod to test consume secrets May 14 13:07:04.552: INFO: Waiting up to 5m0s for pod "pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828" in namespace "secrets-5825" to be "success or failure" May 14 13:07:04.563: INFO: Pod "pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828": Phase="Pending", Reason="", readiness=false. Elapsed: 10.951232ms May 14 13:07:06.671: INFO: Pod "pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119056092s May 14 13:07:08.677: INFO: Pod "pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828": Phase="Running", Reason="", readiness=true. Elapsed: 4.124970948s May 14 13:07:10.681: INFO: Pod "pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.129050427s STEP: Saw pod success May 14 13:07:10.681: INFO: Pod "pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828" satisfied condition "success or failure" May 14 13:07:10.684: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828 container env-test: STEP: delete the pod May 14 13:07:10.733: INFO: Waiting for pod pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828 to disappear May 14 13:07:10.749: INFO: Pod pod-configmaps-301cda64-5b0e-4e93-92c0-53d4fdb19828 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:07:10.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5825" for this suite. May 14 13:07:16.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:07:16.850: INFO: namespace secrets-5825 deletion completed in 6.071114476s • [SLOW TEST:12.413 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:07:16.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4510 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 13:07:17.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 13:07:43.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=udp&host=10.244.2.77&port=8081&tries=1'] Namespace:pod-network-test-4510 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:07:43.251: INFO: >>> kubeConfig: /root/.kube/config I0514 13:07:43.286478 6 log.go:172] (0xc000d50630) (0xc0011dfd60) Create stream I0514 13:07:43.286526 6 log.go:172] (0xc000d50630) (0xc0011dfd60) Stream added, broadcasting: 1 I0514 13:07:43.288780 6 log.go:172] (0xc000d50630) Reply frame received for 1 I0514 13:07:43.288818 6 log.go:172] (0xc000d50630) (0xc0011dfe00) Create stream I0514 13:07:43.288825 6 log.go:172] (0xc000d50630) (0xc0011dfe00) Stream added, broadcasting: 3 I0514 13:07:43.290048 6 log.go:172] (0xc000d50630) Reply frame received for 3 I0514 13:07:43.290108 6 log.go:172] (0xc000d50630) (0xc00223e1e0) Create stream I0514 13:07:43.290134 6 log.go:172] (0xc000d50630) (0xc00223e1e0) Stream added, 
broadcasting: 5 I0514 13:07:43.291191 6 log.go:172] (0xc000d50630) Reply frame received for 5 I0514 13:07:43.426500 6 log.go:172] (0xc000d50630) Data frame received for 3 I0514 13:07:43.426526 6 log.go:172] (0xc0011dfe00) (3) Data frame handling I0514 13:07:43.426541 6 log.go:172] (0xc0011dfe00) (3) Data frame sent I0514 13:07:43.427591 6 log.go:172] (0xc000d50630) Data frame received for 5 I0514 13:07:43.427640 6 log.go:172] (0xc00223e1e0) (5) Data frame handling I0514 13:07:43.427671 6 log.go:172] (0xc000d50630) Data frame received for 3 I0514 13:07:43.427683 6 log.go:172] (0xc0011dfe00) (3) Data frame handling I0514 13:07:43.429710 6 log.go:172] (0xc000d50630) Data frame received for 1 I0514 13:07:43.429739 6 log.go:172] (0xc0011dfd60) (1) Data frame handling I0514 13:07:43.429757 6 log.go:172] (0xc0011dfd60) (1) Data frame sent I0514 13:07:43.429777 6 log.go:172] (0xc000d50630) (0xc0011dfd60) Stream removed, broadcasting: 1 I0514 13:07:43.429808 6 log.go:172] (0xc000d50630) Go away received I0514 13:07:43.430227 6 log.go:172] (0xc000d50630) (0xc0011dfd60) Stream removed, broadcasting: 1 I0514 13:07:43.430257 6 log.go:172] (0xc000d50630) (0xc0011dfe00) Stream removed, broadcasting: 3 I0514 13:07:43.430278 6 log.go:172] (0xc000d50630) (0xc00223e1e0) Stream removed, broadcasting: 5 May 14 13:07:43.430: INFO: Waiting for endpoints: map[] May 14 13:07:43.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostName&protocol=udp&host=10.244.1.30&port=8081&tries=1'] Namespace:pod-network-test-4510 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:07:43.433: INFO: >>> kubeConfig: /root/.kube/config I0514 13:07:43.458176 6 log.go:172] (0xc002460840) (0xc000531860) Create stream I0514 13:07:43.458213 6 log.go:172] (0xc002460840) (0xc000531860) Stream added, broadcasting: 1 I0514 13:07:43.460469 6 log.go:172] (0xc002460840) Reply frame received for 1 I0514 13:07:43.460507 6 log.go:172] (0xc002460840) (0xc0011dff40) Create stream I0514 13:07:43.460527 6 log.go:172] (0xc002460840) (0xc0011dff40) Stream added, broadcasting: 3 I0514 13:07:43.461729 6 log.go:172] (0xc002460840) Reply frame received for 3 I0514 13:07:43.461786 6 log.go:172] (0xc002460840) (0xc0005319a0) Create stream I0514 13:07:43.461830 6 log.go:172] (0xc002460840) (0xc0005319a0) Stream added, broadcasting: 5 I0514 13:07:43.462919 6 log.go:172] (0xc002460840) Reply frame received for 5 I0514 13:07:43.525028 6 log.go:172] (0xc002460840) Data frame received for 3 I0514 13:07:43.525049 6 log.go:172] (0xc0011dff40) (3) Data frame handling I0514 13:07:43.525061 6 log.go:172] (0xc0011dff40) (3) Data frame sent I0514 13:07:43.525803 6 log.go:172] (0xc002460840) Data frame received for 3 I0514 13:07:43.525824 6 log.go:172] (0xc0011dff40) (3) Data frame handling I0514 13:07:43.525837 6 log.go:172] (0xc002460840) Data frame received for 5 I0514 13:07:43.525848 6 log.go:172] (0xc0005319a0) (5) Data frame handling I0514 13:07:43.527143 6 log.go:172] (0xc002460840) Data frame received for 1 I0514 13:07:43.527164 6 log.go:172] (0xc000531860) (1) Data frame handling I0514 13:07:43.527174 6 log.go:172] (0xc000531860) (1) Data frame sent I0514 13:07:43.527369 6 log.go:172] (0xc002460840) (0xc000531860) Stream removed, broadcasting: 1 I0514 13:07:43.527457 6 log.go:172] (0xc002460840) Go away received I0514 13:07:43.527498 6 log.go:172] (0xc002460840) (0xc000531860) Stream removed, broadcasting: 1 I0514 13:07:43.527547 6 
log.go:172] (0xc002460840) (0xc0011dff40) Stream removed, broadcasting: 3 I0514 13:07:43.527582 6 log.go:172] (0xc002460840) (0xc0005319a0) Stream removed, broadcasting: 5 May 14 13:07:43.527: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:07:43.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4510" for this suite. May 14 13:08:05.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:08:05.631: INFO: namespace pod-network-test-4510 deletion completed in 22.099294424s • [SLOW TEST:48.780 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:08:05.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 14 13:08:05.704: INFO: Waiting up to 5m0s for pod "pod-eb17c75d-250a-4185-8de2-779cb8ee797b" in namespace "emptydir-7417" to be "success or failure" May 14 13:08:05.709: INFO: Pod "pod-eb17c75d-250a-4185-8de2-779cb8ee797b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14726ms May 14 13:08:07.756: INFO: Pod "pod-eb17c75d-250a-4185-8de2-779cb8ee797b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052246762s May 14 13:08:09.761: INFO: Pod "pod-eb17c75d-250a-4185-8de2-779cb8ee797b": Phase="Running", Reason="", readiness=true. Elapsed: 4.056921818s May 14 13:08:11.765: INFO: Pod "pod-eb17c75d-250a-4185-8de2-779cb8ee797b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.061067012s STEP: Saw pod success May 14 13:08:11.765: INFO: Pod "pod-eb17c75d-250a-4185-8de2-779cb8ee797b" satisfied condition "success or failure" May 14 13:08:11.768: INFO: Trying to get logs from node iruya-worker pod pod-eb17c75d-250a-4185-8de2-779cb8ee797b container test-container: STEP: delete the pod May 14 13:08:11.788: INFO: Waiting for pod pod-eb17c75d-250a-4185-8de2-779cb8ee797b to disappear May 14 13:08:11.792: INFO: Pod pod-eb17c75d-250a-4185-8de2-779cb8ee797b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:08:11.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7417" for this suite. May 14 13:08:17.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:08:17.873: INFO: namespace emptydir-7417 deletion completed in 6.077917323s • [SLOW TEST:12.242 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:08:17.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 14 13:08:17.982: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:08:24.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8312" for this suite. 
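With restartPolicy: Never, a failing init container is not retried: the pod goes straight to Failed and neither later init containers nor the app containers ever start, which is the invariant this spec asserts. A sketch of such a pod (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]      # fails once, is never retried
  - name: init2
    image: busybox
    command: ["/bin/true"]       # never runs
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]       # never runs either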
May 14 13:08:30.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:08:30.256: INFO: namespace init-container-8312 deletion completed in 6.13362877s • [SLOW TEST:12.382 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:08:30.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 14 13:08:38.495: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:38.520: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:40.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:40.524: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:42.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:42.525: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:44.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:44.524: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:46.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:46.524: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:48.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:48.525: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:50.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:50.525: INFO: Pod pod-with-prestop-http-hook still exists May 14 13:08:52.520: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 13:08:52.525: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:08:52.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9162" for this suite. 
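A preStop hook is an action the kubelet runs after the delete request but before the container is killed, which is why the pod lingers through several poll cycles above. Sketched below; the handler address and path are assumptions, since the suite points the hook at its separate HTTPGet handler pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumed handler path
          port: 8080
          host: 10.244.1.99         # assumed IP of the handler pod

The "check prestop hook" step then asserts that the handler pod actually received the GET before the hooked pod finished terminating.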
May 14 13:09:14.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:09:14.627: INFO: namespace container-lifecycle-hook-9162 deletion completed in 22.088968752s • [SLOW TEST:44.371 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:09:14.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:09:14.715: INFO: Create a RollingUpdate DaemonSet May 14 13:09:14.719: INFO: Check that daemon pods launch on every node of the cluster May 14 13:09:14.751: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:14.754: INFO: Number of nodes with available pods: 0 May 14 13:09:14.754: INFO: Node iruya-worker is running more than one daemon pod May 14 13:09:15.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:15.763: INFO: Number of nodes with available pods: 0 May 14 13:09:15.763: INFO: Node iruya-worker is running more than one daemon pod May 14 13:09:16.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:17.117: INFO: Number of nodes with available pods: 0 May 14 13:09:17.117: INFO: Node iruya-worker is running more than one daemon pod May 14 13:09:17.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:17.839: INFO: Number of nodes with available pods: 0 May 14 13:09:17.839: INFO: Node iruya-worker is running more than one daemon pod May 14 13:09:18.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:18.762: INFO: Number of nodes with available pods: 0 May 14 13:09:18.762: INFO: Node iruya-worker is running more than one daemon pod May 14 13:09:19.782: INFO: DaemonSet pods can't tolerate 
node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:19.786: INFO: Number of nodes with available pods: 2 May 14 13:09:19.786: INFO: Number of running nodes: 2, number of available pods: 2 May 14 13:09:19.786: INFO: Update the DaemonSet to trigger a rollout May 14 13:09:19.792: INFO: Updating DaemonSet daemon-set May 14 13:09:24.837: INFO: Roll back the DaemonSet before rollout is complete May 14 13:09:24.842: INFO: Updating DaemonSet daemon-set May 14 13:09:24.842: INFO: Make sure DaemonSet rollback is complete May 14 13:09:24.848: INFO: Wrong image for pod: daemon-set-xlw2m. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 14 13:09:24.848: INFO: Pod daemon-set-xlw2m is not available May 14 13:09:24.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:25.872: INFO: Wrong image for pod: daemon-set-xlw2m. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 14 13:09:25.872: INFO: Pod daemon-set-xlw2m is not available May 14 13:09:25.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:09:26.876: INFO: Pod daemon-set-w7w9p is not available May 14 13:09:26.879: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3400, will wait for the garbage collector to delete the pods May 14 13:09:26.944: INFO: Deleting DaemonSet.extensions daemon-set took: 7.551667ms May 14 13:09:27.244: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.210851ms May 14 13:09:31.247: INFO: Number of nodes with available pods: 0 May 14 13:09:31.248: INFO: Number of running nodes: 0, number of available pods: 0 May 14 13:09:31.254: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3400/daemonsets","resourceVersion":"10853308"},"items":null} May 14 13:09:31.257: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3400/pods","resourceVersion":"10853308"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:09:31.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3400" for this suite. 
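The rollback scenario needs nothing exotic in the manifest, only the RollingUpdate strategy; a sketch (labels are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

Updating the image to the unpullable foo:non-existent and then rolling back (e.g. kubectl rollout undo daemonset/daemon-set) replaces only the pod that had picked up the bad image; the still-healthy pod on the other node is left alone, hence "without unnecessary restarts".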
May 14 13:09:37.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:09:37.383: INFO: namespace daemonsets-3400 deletion completed in 6.10971557s • [SLOW TEST:22.755 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:09:37.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 14 13:09:37.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6510' May 14 13:09:40.258: INFO: stderr: "" May 14 13:09:40.258: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 14 13:09:41.263: INFO: Selector matched 1 pods for map[app:redis] May 14 13:09:41.263: INFO: Found 0 / 1 May 14 13:09:42.263: INFO: Selector matched 1 pods for map[app:redis] May 14 13:09:42.263: INFO: Found 0 / 1 May 14 13:09:43.262: INFO: Selector matched 1 pods for map[app:redis] May 14 13:09:43.262: INFO: Found 0 / 1 May 14 13:09:44.262: INFO: Selector matched 1 pods for map[app:redis] May 14 13:09:44.262: INFO: Found 1 / 1 May 14 13:09:44.262: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 14 13:09:44.266: INFO: Selector matched 1 pods for map[app:redis] May 14 13:09:44.266: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 13:09:44.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-k269q --namespace=kubectl-6510 -p {"metadata":{"annotations":{"x":"y"}}}' May 14 13:09:44.373: INFO: stderr: "" May 14 13:09:44.373: INFO: stdout: "pod/redis-master-k269q patched\n" STEP: checking annotations May 14 13:09:44.376: INFO: Selector matched 1 pods for map[app:redis] May 14 13:09:44.376: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:09:44.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6510" for this suite. 
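The kubectl patch issued above has a direct client-go equivalent: a strategic merge patch on the pod. A sketch follows, using the v1.15-era client-go method signatures (pre-context); pod name, namespace, and payload are copied from the log.

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Same payload as `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}`.
    patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    pod, err := clientset.CoreV1().Pods("kubectl-6510").
        Patch("redis-master-k269q", types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err)
    }
    fmt.Println("annotation x =", pod.Annotations["x"])
}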
May 14 13:10:06.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:10:06.478: INFO: namespace kubectl-6510 deletion completed in 22.099678629s • [SLOW TEST:29.095 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:10:06.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 13:10:10.616: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:10:10.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9194" for this suite. 
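The termination-message check above exercises a container that writes "OK" to its termination-message file. A sketch of that pod shape — not the test's source; the image and command are assumptions, field names follow the v1.15-era k8s.io/api. With FallbackToLogsOnError, the kubelet only falls back to container logs when the file is empty and the container failed; here the file is written, so its contents win.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-file"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "term",
                Image:   "docker.io/library/busybox:1.29", // assumed image
                Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
                TerminationMessagePath:   "/dev/termination-log", // the default path
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].TerminationMessagePolicy)
}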
May 14 13:10:16.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:10:16.806: INFO: namespace container-runtime-9194 deletion completed in 6.106196314s • [SLOW TEST:10.328 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:10:16.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-e303becb-2c72-4ff0-a8ef-675e6349f096 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:10:16.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-644" for this suite. 
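The "empty secret key" test above succeeds by the apiserver rejecting the object. A sketch of the invalid Secret it submits — illustrative; Secret data keys must be non-empty and match [-._a-zA-Z0-9]+, so Create is expected to fail validation.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
        Data: map[string][]byte{
            "": []byte("value-1"), // empty key: rejected by apiserver validation
        },
    }
    fmt.Println("keys:", len(secret.Data))
}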
May 14 13:10:22.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:10:22.998: INFO: namespace secrets-644 deletion completed in 6.096090351s • [SLOW TEST:6.191 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:10:22.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 13:10:23.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8021' May 14 13:10:23.147: INFO: stderr: "" May 14 13:10:23.147: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 14 13:10:23.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8021' May 14 13:10:32.190: INFO: stderr: "" May 14 13:10:32.190: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:10:32.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8021" for this suite. 
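The `kubectl run --restart=Never --generator=run-pod/v1` invocation above boils down to creating a bare v1 Pod with no managing controller. A sketch of the equivalent object — the "run" label key follows the old generator's convention and is an assumption here.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "e2e-test-nginx-pod",
            Labels: map[string]string{"run": "e2e-test-nginx-pod"},
        },
        Spec: corev1.PodSpec{
            // --restart=Never maps straight to this field.
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "e2e-test-nginx-pod",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }
    fmt.Println(pod.Name, pod.Spec.RestartPolicy)
}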
May 14 13:10:38.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:10:38.300: INFO: namespace kubectl-8021 deletion completed in 6.103630639s • [SLOW TEST:15.301 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:10:38.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 14 13:10:38.460: INFO: Waiting up to 5m0s for pod "pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4" in namespace "emptydir-7304" to be "success or failure" May 14 13:10:38.501: INFO: Pod "pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.558336ms May 14 13:10:40.687: INFO: Pod "pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226752215s May 14 13:10:42.692: INFO: Pod "pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231494636s STEP: Saw pod success May 14 13:10:42.692: INFO: Pod "pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4" satisfied condition "success or failure" May 14 13:10:42.695: INFO: Trying to get logs from node iruya-worker2 pod pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4 container test-container: STEP: delete the pod May 14 13:10:42.732: INFO: Waiting for pod pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4 to disappear May 14 13:10:42.748: INFO: Pod pod-c8c3b5c3-fdff-41be-a0b5-19c3409993a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:10:42.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7304" for this suite. 
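The emptyDir test above mounts a default-medium emptyDir and verifies the 0777 mode from inside the pod. A minimal sketch of that pod shape — illustrative; the real test uses the e2e "mounttest" image with flags, so the busybox command here is an assumption.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0777"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // Empty EmptyDirVolumeSource = "default" medium (node disk, not tmpfs).
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{},
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"/bin/sh", "-c", "stat -c %a /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
        },
    }
    fmt.Println(pod.Name)
}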
May 14 13:10:48.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:10:48.849: INFO: namespace emptydir-7304 deletion completed in 6.078336575s • [SLOW TEST:10.549 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:10:48.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 14 13:10:53.530: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4d892f4f-b50c-4fbd-907f-795712fbf1b0" May 14 13:10:53.530: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4d892f4f-b50c-4fbd-907f-795712fbf1b0" in namespace "pods-5993" to be "terminated due to deadline exceeded" May 14 13:10:53.533: INFO: Pod "pod-update-activedeadlineseconds-4d892f4f-b50c-4fbd-907f-795712fbf1b0": Phase="Running", Reason="", readiness=true. Elapsed: 3.141643ms May 14 13:10:55.537: INFO: Pod "pod-update-activedeadlineseconds-4d892f4f-b50c-4fbd-907f-795712fbf1b0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007330388s May 14 13:10:55.537: INFO: Pod "pod-update-activedeadlineseconds-4d892f4f-b50c-4fbd-907f-795712fbf1b0" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:10:55.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5993" for this suite. 
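The update step above sets spec.activeDeadlineSeconds on a running pod, after which the kubelet fails it with reason DeadlineExceeded, exactly as the log shows. A sketch against the v1.15-era client-go API (pre-context signatures); names are taken from the log, and the Get/Update conflict-retry loop a robust client would use is omitted for brevity.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    pods := cs.CoreV1().Pods("pods-5993")
    pod, err := pods.Get("pod-update-activedeadlineseconds-4d892f4f-b50c-4fbd-907f-795712fbf1b0", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // The deadline may be added or decreased on a live pod, never increased or removed.
    newDeadline := int64(5)
    pod.Spec.ActiveDeadlineSeconds = &newDeadline
    if _, err := pods.Update(pod); err != nil {
        panic(err)
    }
    fmt.Println("activeDeadlineSeconds set to", newDeadline)
}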
May 14 13:11:01.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:11:01.646: INFO: namespace pods-5993 deletion completed in 6.103633963s • [SLOW TEST:12.796 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:11:01.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-8895/configmap-test-f602274f-f8c6-4e6a-a514-3a9b34e87757 STEP: Creating a pod to test consume configMaps May 14 13:11:01.727: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d" in namespace "configmap-8895" to be "success or failure" May 14 13:11:01.747: INFO: Pod "pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.92261ms May 14 13:11:03.957: INFO: Pod "pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229694043s May 14 13:11:05.961: INFO: Pod "pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d": Phase="Running", Reason="", readiness=true. Elapsed: 4.234055968s May 14 13:11:07.966: INFO: Pod "pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238571371s STEP: Saw pod success May 14 13:11:07.966: INFO: Pod "pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d" satisfied condition "success or failure" May 14 13:11:07.969: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d container env-test: STEP: delete the pod May 14 13:11:07.993: INFO: Waiting for pod pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d to disappear May 14 13:11:07.998: INFO: Pod pod-configmaps-2d6115ca-c182-4d7d-bfd5-af80d3985c2d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:11:07.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8895" for this suite. 
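The env-consumption pattern verified above wires a container environment variable to a ConfigMap key via valueFrom/configMapKeyRef. A sketch of that wiring — the key name "data-1" and env var name are assumptions; the ConfigMap name is from the log.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    env := corev1.EnvVar{
        Name: "CONFIG_DATA_1", // assumed env var name
        ValueFrom: &corev1.EnvVarSource{
            ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-f602274f-f8c6-4e6a-a514-3a9b34e87757",
                },
                Key: "data-1", // assumed key
            },
        },
    }
    fmt.Println(env.Name, "<-", env.ValueFrom.ConfigMapKeyRef.Key)
}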
May 14 13:11:14.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:11:14.126: INFO: namespace configmap-8895 deletion completed in 6.125645746s • [SLOW TEST:12.480 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:11:14.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 14 13:11:14.161: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:11:22.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4367" for this suite. 
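Behind the terse "PodSpec: initContainers in spec.initContainers" line above is a pod whose init containers must each run to completion, in declaration order, before any app container starts; with RestartPolicy Never a failing init container fails the pod. A sketch of the shape — images and commands are illustrative assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
                {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
            },
        },
    }
    fmt.Println(len(pod.Spec.InitContainers), "init containers run before", pod.Spec.Containers[0].Name)
}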
May 14 13:11:28.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:11:28.486: INFO: namespace init-container-4367 deletion completed in 6.066111154s • [SLOW TEST:14.360 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:11:28.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ea03a037-acb3-41a0-94b4-6b5f2a25d1db STEP: Creating a pod to test consume secrets May 14 13:11:28.543: INFO: Waiting up to 5m0s for pod "pod-secrets-f63612e3-37f7-4267-809d-e5668d715323" in namespace "secrets-7886" to be "success or failure" May 14 13:11:28.578: INFO: Pod "pod-secrets-f63612e3-37f7-4267-809d-e5668d715323": Phase="Pending", Reason="", readiness=false. Elapsed: 35.222609ms May 14 13:11:30.921: INFO: Pod "pod-secrets-f63612e3-37f7-4267-809d-e5668d715323": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377792107s May 14 13:11:32.926: INFO: Pod "pod-secrets-f63612e3-37f7-4267-809d-e5668d715323": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.382907858s STEP: Saw pod success May 14 13:11:32.926: INFO: Pod "pod-secrets-f63612e3-37f7-4267-809d-e5668d715323" satisfied condition "success or failure" May 14 13:11:32.928: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f63612e3-37f7-4267-809d-e5668d715323 container secret-volume-test: STEP: delete the pod May 14 13:11:32.979: INFO: Waiting for pod pod-secrets-f63612e3-37f7-4267-809d-e5668d715323 to disappear May 14 13:11:33.010: INFO: Pod pod-secrets-f63612e3-37f7-4267-809d-e5668d715323 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:11:33.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7886" for this suite. 
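The defaultMode knob verified above applies a file mode to every file projected from the Secret unless a per-item Mode overrides it. A sketch of the volume definition — the Secret name is from the log; 0400 is the mode this conformance test conventionally checks.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // read-only for owner; applied to each projected file
    vol := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName:  "secret-test-ea03a037-acb3-41a0-94b4-6b5f2a25d1db",
                DefaultMode: &mode,
            },
        },
    }
    fmt.Printf("defaultMode=%o\n", *vol.VolumeSource.Secret.DefaultMode)
}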
May 14 13:11:39.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:11:39.131: INFO: namespace secrets-7886 deletion completed in 6.117472166s • [SLOW TEST:10.644 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:11:39.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 14 13:11:39.302: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1458" to be "success or failure" May 14 13:11:39.322: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.400026ms May 14 13:11:41.325: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023556664s May 14 13:11:43.330: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027988588s May 14 13:11:45.333: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031582008s STEP: Saw pod success May 14 13:11:45.333: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 14 13:11:45.336: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 14 13:11:45.377: INFO: Waiting for pod pod-host-path-test to disappear May 14 13:11:45.382: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:11:45.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1458" for this suite. 
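The hostPath test above mounts a directory from the node and stats the mount point to confirm its mode. A sketch of the volume — the /tmp path is an assumption (it is what the e2e hostPath helper conventionally uses, but the log does not show it).

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            // Type is left nil: no existence/type check before mounting.
            HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"},
        },
    }
    fmt.Println("hostPath:", vol.VolumeSource.HostPath.Path)
}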
May 14 13:11:51.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:11:51.479: INFO: namespace hostpath-1458 deletion completed in 6.094364166s • [SLOW TEST:12.348 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:11:51.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 13:11:55.642: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:11:56.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6494" for this suite. 
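This is the counterpart of the earlier termination-message case: the container exits 0 without writing the termination-message file, and because the pod succeeded, FallbackToLogsOnError does not fall back to logs — the message stays empty, matching `Expected: &{} ... Termination Message:` in the log. A sketch; image and command are assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-empty"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "term",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"/bin/true"}, // succeeds, writes nothing
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    fmt.Println(pod.Name)
}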
May 14 13:12:02.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:12:02.233: INFO: namespace container-runtime-6494 deletion completed in 6.192755967s • [SLOW TEST:10.753 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:12:02.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-effb312b-a90d-4047-baa5-d40379d96dd3 STEP: Creating a pod to test consume secrets May 14 13:12:02.339: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9" in namespace "projected-3361" to be "success or failure" May 14 13:12:02.370: INFO: Pod "pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.689211ms May 14 13:12:04.373: INFO: Pod "pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034190939s May 14 13:12:07.527: INFO: Pod "pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9": Phase="Running", Reason="", readiness=true. Elapsed: 5.188056858s May 14 13:12:09.531: INFO: Pod "pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.192086499s STEP: Saw pod success May 14 13:12:09.531: INFO: Pod "pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9" satisfied condition "success or failure" May 14 13:12:09.533: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9 container projected-secret-volume-test: STEP: delete the pod May 14 13:12:09.566: INFO: Waiting for pod pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9 to disappear May 14 13:12:09.657: INFO: Pod pod-projected-secrets-af9ef83a-4f13-48b4-af3c-e2f5cba174d9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:12:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3361" for this suite. 
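The "with mappings" variant verified above remaps a Secret data key to a different file path inside the projected volume via items. A sketch of that volume — the Secret name is from the log; the key and path names are illustrative assumptions.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-secret-test-map-effb312b-a90d-4047-baa5-d40379d96dd3",
                        },
                        Items: []corev1.KeyToPath{{
                            Key:  "data-1",          // assumed key name
                            Path: "new-path-data-1", // file created under the mount point
                        }},
                    },
                }},
            },
        },
    }
    fmt.Println(vol.VolumeSource.Projected.Sources[0].Secret.Items[0].Path)
}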
May 14 13:12:15.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:12:15.846: INFO: namespace projected-3361 deletion completed in 6.185509661s • [SLOW TEST:13.612 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:12:15.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 13:12:15.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4589' May 14 13:12:16.053: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 13:12:16.053: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 14 13:12:16.078: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 14 13:12:16.113: INFO: scanned /root for discovery docs: May 14 13:12:16.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4589' May 14 13:12:33.251: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 14 13:12:33.251: INFO: stdout: "Created e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6\nScaling up e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 14 13:12:33.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4589' May 14 13:12:33.340: INFO: stderr: "" May 14 13:12:33.340: INFO: stdout: "e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6-v5ssq " May 14 13:12:33.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6-v5ssq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4589' May 14 13:12:33.429: INFO: stderr: "" May 14 13:12:33.429: INFO: stdout: "true" May 14 13:12:33.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6-v5ssq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4589' May 14 13:12:33.521: INFO: stderr: "" May 14 13:12:33.521: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 14 13:12:33.521: INFO: e2e-test-nginx-rc-e23712b4142612438fb71b5abe1647a6-v5ssq is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 14 13:12:33.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4589' May 14 13:12:33.632: INFO: stderr: "" May 14 13:12:33.632: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:12:33.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4589" for this suite. 
May 14 13:12:39.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:12:39.725: INFO: namespace kubectl-4589 deletion completed in 6.089934451s • [SLOW TEST:23.879 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:12:39.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:12:39.778: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 14 13:12:39.787: INFO: Pod name sample-pod: Found 0 pods out of 1 May 14 13:12:44.791: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 13:12:44.791: INFO: Creating deployment "test-rolling-update-deployment" May 14 13:12:44.795: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 14 13:12:44.802: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 14 13:12:46.808: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 14 13:12:46.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:12:48.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058768, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058764, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:12:50.815: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 14 13:12:50.825: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-290,SelfLink:/apis/apps/v1/namespaces/deployment-290/deployments/test-rolling-update-deployment,UID:24180fab-396d-4de3-aac0-5647ed3afe93,ResourceVersion:10854164,Generation:1,CreationTimestamp:2020-05-14 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-14 13:12:44 +0000 UTC 2020-05-14 13:12:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-14 13:12:48 +0000 UTC 2020-05-14 13:12:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 14 13:12:50.829: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-290,SelfLink:/apis/apps/v1/namespaces/deployment-290/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:165c8ed7-b91c-4932-9f43-1f7135e1de02,ResourceVersion:10854153,Generation:1,CreationTimestamp:2020-05-14 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 24180fab-396d-4de3-aac0-5647ed3afe93 0xc0031f78d7 0xc0031f78d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 14 13:12:50.829: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 14 13:12:50.829: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-290,SelfLink:/apis/apps/v1/namespaces/deployment-290/replicasets/test-rolling-update-controller,UID:0c15ed58-c997-4af4-9f17-0c6d6a207cda,ResourceVersion:10854162,Generation:2,CreationTimestamp:2020-05-14 13:12:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 24180fab-396d-4de3-aac0-5647ed3afe93 0xc0031f7807 0xc0031f7808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 
13:12:50.832: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-w66sg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-w66sg,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-290,SelfLink:/api/v1/namespaces/deployment-290/pods/test-rolling-update-deployment-79f6b9d75c-w66sg,UID:7f43d5df-e83d-4fe8-92cf-7474f4afeef9,ResourceVersion:10854152,Generation:0,CreationTimestamp:2020-05-14 13:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 165c8ed7-b91c-4932-9f43-1f7135e1de02 0xc002b77327 0xc002b77328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rncnc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rncnc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rncnc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b773a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b773c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:12:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:12:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:12:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:12:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.91,StartTime:2020-05-14 13:12:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-14 13:12:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://7d6a035252c91b52c99ead8e416f55c325f83c994e241a527b0b9584370bad0b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:12:50.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-290" for this suite. May 14 13:12:56.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:12:56.948: INFO: namespace deployment-290 deletion completed in 6.113788239s • [SLOW TEST:17.223 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:12:56.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 14 13:12:57.407: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854207,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 13:12:57.407: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854207,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 14 13:13:07.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854227,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 14 13:13:07.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854227,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 14 13:13:17.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854248,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 13:13:17.422: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854248,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 14 13:13:27.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854268,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 13:13:27.430: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-a,UID:7510ffcd-6aa8-453f-bf1b-bf40d02a3ad2,ResourceVersion:10854268,Generation:0,CreationTimestamp:2020-05-14 13:12:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 14 13:13:37.437: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-b,UID:ed113c11-21c3-4a9b-a081-8adf057ef66f,ResourceVersion:10854290,Generation:0,CreationTimestamp:2020-05-14 13:13:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 13:13:37.437: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-b,UID:ed113c11-21c3-4a9b-a081-8adf057ef66f,ResourceVersion:10854290,Generation:0,CreationTimestamp:2020-05-14 13:13:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 14 13:13:47.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-b,UID:ed113c11-21c3-4a9b-a081-8adf057ef66f,ResourceVersion:10854310,Generation:0,CreationTimestamp:2020-05-14 13:13:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 13:13:47.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5790,SelfLink:/api/v1/namespaces/watch-5790/configmaps/e2e-watch-test-configmap-b,UID:ed113c11-21c3-4a9b-a081-8adf057ef66f,ResourceVersion:10854310,Generation:0,CreationTimestamp:2020-05-14 13:13:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] 
Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:13:57.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5790" for this suite. May 14 13:14:03.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:14:03.649: INFO: namespace watch-5790 deletion completed in 6.202125271s • [SLOW TEST:66.701 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:14:03.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-c758 STEP: Creating a pod to test atomic-volume-subpath May 14 13:14:03.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-c758" in namespace "subpath-9279" to be "success or failure" May 14 13:14:03.743: INFO: Pod "pod-subpath-test-secret-c758": Phase="Pending", Reason="", readiness=false. Elapsed: 3.225327ms May 14 13:14:05.748: INFO: Pod "pod-subpath-test-secret-c758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007774083s May 14 13:14:07.752: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 4.011506664s May 14 13:14:09.756: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 6.015861741s May 14 13:14:11.760: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 8.019838489s May 14 13:14:13.765: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 10.024325609s May 14 13:14:15.769: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 12.02834387s May 14 13:14:17.772: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 14.032220141s May 14 13:14:19.777: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 16.037111624s May 14 13:14:21.782: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 18.041737578s May 14 13:14:23.786: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. Elapsed: 20.045431849s May 14 13:14:25.790: INFO: Pod "pod-subpath-test-secret-c758": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.050134042s May 14 13:14:27.794: INFO: Pod "pod-subpath-test-secret-c758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054054511s STEP: Saw pod success May 14 13:14:27.794: INFO: Pod "pod-subpath-test-secret-c758" satisfied condition "success or failure" May 14 13:14:27.796: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-c758 container test-container-subpath-secret-c758: STEP: delete the pod May 14 13:14:27.837: INFO: Waiting for pod pod-subpath-test-secret-c758 to disappear May 14 13:14:27.989: INFO: Pod pod-subpath-test-secret-c758 no longer exists STEP: Deleting pod pod-subpath-test-secret-c758 May 14 13:14:27.989: INFO: Deleting pod "pod-subpath-test-secret-c758" in namespace "subpath-9279" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:14:27.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9279" for this suite. May 14 13:14:34.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:14:34.097: INFO: namespace subpath-9279 deletion completed in 6.097658484s • [SLOW TEST:30.448 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:14:34.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:14:34.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3744" for this suite. 
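
The Kubelet spec above boils down to scheduling a pod whose only container fails on every start, then verifying that the pod can still be deleted. A minimal sketch of such a pod, with an illustrative name and an assumed restart policy (the actual test pod may differ):

    apiVersion: v1
    kind: Pod
    metadata:
      name: bin-false-pod              # illustrative name
    spec:
      restartPolicy: Never             # assumption; a container that always fails is all the test needs
      containers:
      - name: bin-false
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]        # exits 1 on every start, so the container never succeeds

Deleting such a pod exercises the kubelet's cleanup path for containers that never reach a healthy state.
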
May 14 13:14:40.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:14:40.377: INFO: namespace kubelet-test-3744 deletion completed in 6.096265379s • [SLOW TEST:6.280 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:14:40.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1628 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 13:14:40.475: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 13:15:10.635: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.92 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1628 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:15:10.635: INFO: >>> kubeConfig: /root/.kube/config I0514 13:15:10.668532 6 log.go:172] (0xc0009d9760) (0xc000216820) Create stream I0514 13:15:10.668563 6 log.go:172] (0xc0009d9760) (0xc000216820) Stream added, broadcasting: 1 I0514 13:15:10.671547 6 log.go:172] (0xc0009d9760) Reply frame received for 1 I0514 13:15:10.671604 6 log.go:172] (0xc0009d9760) (0xc000216960) Create stream I0514 13:15:10.671621 6 log.go:172] (0xc0009d9760) (0xc000216960) Stream added, broadcasting: 3 I0514 13:15:10.672635 6 log.go:172] (0xc0009d9760) Reply frame received for 3 I0514 13:15:10.672683 6 log.go:172] (0xc0009d9760) (0xc001112000) Create stream I0514 13:15:10.672695 6 log.go:172] (0xc0009d9760) (0xc001112000) Stream added, broadcasting: 5 I0514 13:15:10.673883 6 log.go:172] (0xc0009d9760) Reply frame received for 5 I0514 13:15:11.779965 6 log.go:172] (0xc0009d9760) Data frame received for 5 I0514 13:15:11.780015 6 log.go:172] (0xc001112000) (5) Data frame handling I0514 13:15:11.780063 6 log.go:172] (0xc0009d9760) Data frame received for 3 I0514 13:15:11.780079 6 log.go:172] (0xc000216960) (3) Data frame handling I0514 13:15:11.780094 6 log.go:172] (0xc000216960) (3) Data frame sent I0514 13:15:11.780102 6 log.go:172] (0xc0009d9760) Data frame received for 3 I0514 13:15:11.780119 6 log.go:172] (0xc000216960) (3) Data frame handling I0514 13:15:11.782207 6 log.go:172] (0xc0009d9760) Data frame received for 1 I0514 13:15:11.782228 6 log.go:172] 
(0xc000216820) (1) Data frame handling I0514 13:15:11.782285 6 log.go:172] (0xc000216820) (1) Data frame sent I0514 13:15:11.782312 6 log.go:172] (0xc0009d9760) (0xc000216820) Stream removed, broadcasting: 1 I0514 13:15:11.782333 6 log.go:172] (0xc0009d9760) Go away received I0514 13:15:11.782499 6 log.go:172] (0xc0009d9760) (0xc000216820) Stream removed, broadcasting: 1 I0514 13:15:11.782540 6 log.go:172] (0xc0009d9760) (0xc000216960) Stream removed, broadcasting: 3 I0514 13:15:11.782554 6 log.go:172] (0xc0009d9760) (0xc001112000) Stream removed, broadcasting: 5 May 14 13:15:11.782: INFO: Found all expected endpoints: [netserver-0] May 14 13:15:11.785: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.42 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1628 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:15:11.785: INFO: >>> kubeConfig: /root/.kube/config I0514 13:15:11.812630 6 log.go:172] (0xc000cf0420) (0xc000216c80) Create stream I0514 13:15:11.812654 6 log.go:172] (0xc000cf0420) (0xc000216c80) Stream added, broadcasting: 1 I0514 13:15:11.815647 6 log.go:172] (0xc000cf0420) Reply frame received for 1 I0514 13:15:11.815682 6 log.go:172] (0xc000cf0420) (0xc0012e6000) Create stream I0514 13:15:11.815692 6 log.go:172] (0xc000cf0420) (0xc0012e6000) Stream added, broadcasting: 3 I0514 13:15:11.816459 6 log.go:172] (0xc000cf0420) Reply frame received for 3 I0514 13:15:11.816493 6 log.go:172] (0xc000cf0420) (0xc000216d20) Create stream I0514 13:15:11.816504 6 log.go:172] (0xc000cf0420) (0xc000216d20) Stream added, broadcasting: 5 I0514 13:15:11.817450 6 log.go:172] (0xc000cf0420) Reply frame received for 5 I0514 13:15:12.897723 6 log.go:172] (0xc000cf0420) Data frame received for 3 I0514 13:15:12.897745 6 log.go:172] (0xc0012e6000) (3) Data frame handling I0514 13:15:12.897770 6 log.go:172] (0xc0012e6000) (3) Data frame sent I0514 13:15:12.897787 6 log.go:172] (0xc000cf0420) Data frame received for 3 I0514 13:15:12.897795 6 log.go:172] (0xc0012e6000) (3) Data frame handling I0514 13:15:12.897812 6 log.go:172] (0xc000cf0420) Data frame received for 5 I0514 13:15:12.897819 6 log.go:172] (0xc000216d20) (5) Data frame handling I0514 13:15:12.899014 6 log.go:172] (0xc000cf0420) Data frame received for 1 I0514 13:15:12.899035 6 log.go:172] (0xc000216c80) (1) Data frame handling I0514 13:15:12.899049 6 log.go:172] (0xc000216c80) (1) Data frame sent I0514 13:15:12.899232 6 log.go:172] (0xc000cf0420) (0xc000216c80) Stream removed, broadcasting: 1 I0514 13:15:12.899323 6 log.go:172] (0xc000cf0420) (0xc000216c80) Stream removed, broadcasting: 1 I0514 13:15:12.899337 6 log.go:172] (0xc000cf0420) (0xc0012e6000) Stream removed, broadcasting: 3 I0514 13:15:12.899388 6 log.go:172] (0xc000cf0420) Go away received I0514 13:15:12.899450 6 log.go:172] (0xc000cf0420) (0xc000216d20) Stream removed, broadcasting: 5 May 14 13:15:12.899: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:15:12.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1628" for this suite. 
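
The UDP check above runs from a host-network helper pod: for each netserver endpoint it executes `echo hostName | nc -w 1 -u <pod-ip> 8081` and expects a non-empty reply. A rough sketch of such a helper pod, assuming an illustrative image (the suite's own hostexec image differs):

    apiVersion: v1
    kind: Pod
    metadata:
      name: host-test-container-pod
    spec:
      hostNetwork: true                # probe from the node's network namespace, not the pod network
      containers:
      - name: hostexec
        image: docker.io/library/busybox:1.29   # assumption; any image shipping nc works
        command: ["sleep", "3600"]     # idle so the test can exec into it

The "Found all expected endpoints" lines confirm that each netserver pod answered over UDP from the node.
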
May 14 13:15:34.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:15:35.026: INFO: namespace pod-network-test-1628 deletion completed in 22.121702939s • [SLOW TEST:54.648 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:15:35.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 13:15:39.272: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:15:39.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1228" for this suite. 
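
What the terminated-container spec above sets up is, roughly, a non-root pod whose container writes its result to a custom termination-message path (names, the UID, and the exact path are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                # non-root, per the spec title
      containers:
      - name: main
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
        terminationMessagePath: /dev/termination-custom   # default is /dev/termination-log

After the container exits, the kubelet copies the file's contents into the container status, which is why the log can assert `Expected: &{DONE} to match Container's Termination Message: DONE`.
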
May 14 13:15:45.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:15:45.474: INFO: namespace container-runtime-1228 deletion completed in 6.096268931s • [SLOW TEST:10.447 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:15:45.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-e3d23ded-585e-46f7-8590-0d5e94db82bc STEP: Creating a pod to test consume configMaps May 14 13:15:45.562: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438" in namespace "configmap-4673" to be "success or failure" May 14 13:15:45.603: INFO: Pod "pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438": Phase="Pending", Reason="", readiness=false. Elapsed: 41.258828ms May 14 13:15:47.607: INFO: Pod "pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045232013s May 14 13:15:49.610: INFO: Pod "pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048531499s STEP: Saw pod success May 14 13:15:49.610: INFO: Pod "pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438" satisfied condition "success or failure" May 14 13:15:49.612: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438 container configmap-volume-test: STEP: delete the pod May 14 13:15:49.686: INFO: Waiting for pod pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438 to disappear May 14 13:15:49.865: INFO: Pod pod-configmaps-4d3604ff-573b-499f-81cc-c514aaf63438 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:15:49.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4673" for this suite. 
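
The ConfigMap-volume pattern verified above, as a minimal sketch (key and value are illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test-volume
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps
    spec:
      restartPolicy: Never
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume
      containers:
      - name: configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume

Each key in the ConfigMap appears as a file under the mount path; the test passes when the container prints the expected value and exits 0, which is the "success or failure" condition polled above.
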
May 14 13:15:55.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:15:56.035: INFO: namespace configmap-4673 deletion completed in 6.165247624s • [SLOW TEST:10.561 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:15:56.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 14 13:15:56.101: INFO: Waiting up to 5m0s for pod "downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784" in namespace "downward-api-5297" to be "success or failure" May 14 13:15:56.105: INFO: Pod "downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784": Phase="Pending", Reason="", readiness=false. Elapsed: 3.820766ms May 14 13:15:58.108: INFO: Pod "downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007330666s May 14 13:16:00.140: INFO: Pod "downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03889707s STEP: Saw pod success May 14 13:16:00.140: INFO: Pod "downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784" satisfied condition "success or failure" May 14 13:16:00.143: INFO: Trying to get logs from node iruya-worker2 pod downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784 container dapi-container: STEP: delete the pod May 14 13:16:00.236: INFO: Waiting for pod downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784 to disappear May 14 13:16:00.320: INFO: Pod downward-api-bdcfa53b-4195-4d7b-a4a3-21f22cfc4784 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:16:00.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5297" for this suite. 
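
The Downward API env-var plumbing checked above looks roughly like this (pod and variable names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-hostip
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # resolves to the node IP, e.g. 172.17.0.x in this run

status.hostIP is resolved by the kubelet at container start, so the value reflects the node the pod was actually scheduled onto.
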
May 14 13:16:06.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:16:06.425: INFO: namespace downward-api-5297 deletion completed in 6.101372057s • [SLOW TEST:10.389 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:16:06.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 14 13:16:06.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5578 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 14 13:16:10.679: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0514 13:16:10.599653 995 log.go:172] (0xc0009762c0) (0xc0003fe320) Create stream\nI0514 13:16:10.599721 995 log.go:172] (0xc0009762c0) (0xc0003fe320) Stream added, broadcasting: 1\nI0514 13:16:10.602534 995 log.go:172] (0xc0009762c0) Reply frame received for 1\nI0514 13:16:10.602575 995 log.go:172] (0xc0009762c0) (0xc0003fe3c0) Create stream\nI0514 13:16:10.602586 995 log.go:172] (0xc0009762c0) (0xc0003fe3c0) Stream added, broadcasting: 3\nI0514 13:16:10.603624 995 log.go:172] (0xc0009762c0) Reply frame received for 3\nI0514 13:16:10.603663 995 log.go:172] (0xc0009762c0) (0xc0003fe460) Create stream\nI0514 13:16:10.603673 995 log.go:172] (0xc0009762c0) (0xc0003fe460) Stream added, broadcasting: 5\nI0514 13:16:10.604718 995 log.go:172] (0xc0009762c0) Reply frame received for 5\nI0514 13:16:10.604755 995 log.go:172] (0xc0009762c0) (0xc0002b2000) Create stream\nI0514 13:16:10.604766 995 log.go:172] (0xc0009762c0) (0xc0002b2000) Stream added, broadcasting: 7\nI0514 13:16:10.606040 995 log.go:172] (0xc0009762c0) Reply frame received for 7\nI0514 13:16:10.606249 995 log.go:172] (0xc0003fe3c0) (3) Writing data frame\nI0514 13:16:10.606396 995 log.go:172] (0xc0003fe3c0) (3) Writing data frame\nI0514 13:16:10.607486 995 log.go:172] (0xc0009762c0) Data frame received for 5\nI0514 13:16:10.607509 995 log.go:172] (0xc0003fe460) (5) Data frame handling\nI0514 13:16:10.607540 995 log.go:172] (0xc0003fe460) (5) Data frame sent\nI0514 13:16:10.608056 995 log.go:172] (0xc0009762c0) Data frame received for 5\nI0514 13:16:10.608068 995 log.go:172] (0xc0003fe460) (5) Data frame handling\nI0514 13:16:10.608077 995 log.go:172] (0xc0003fe460) (5) Data frame sent\nI0514 13:16:10.635519 995 log.go:172] (0xc0009762c0) Data frame received for 7\nI0514 13:16:10.635572 995 log.go:172] (0xc0002b2000) (7) Data frame handling\nI0514 13:16:10.635601 995 log.go:172] (0xc0009762c0) Data frame received for 5\nI0514 13:16:10.635638 995 log.go:172] (0xc0003fe460) (5) Data frame handling\nI0514 13:16:10.636241 995 log.go:172] (0xc0009762c0) Data frame received for 1\nI0514 13:16:10.636274 995 log.go:172] (0xc0003fe320) (1) Data frame handling\nI0514 13:16:10.636304 995 log.go:172] (0xc0003fe320) (1) Data frame sent\nI0514 13:16:10.636336 995 log.go:172] (0xc0009762c0) (0xc0003fe3c0) Stream removed, broadcasting: 3\nI0514 13:16:10.636401 995 log.go:172] (0xc0009762c0) (0xc0003fe320) Stream removed, broadcasting: 1\nI0514 13:16:10.636560 995 log.go:172] (0xc0009762c0) (0xc0003fe320) Stream removed, broadcasting: 1\nI0514 13:16:10.636588 995 log.go:172] (0xc0009762c0) (0xc0003fe3c0) Stream removed, broadcasting: 3\nI0514 13:16:10.636611 995 log.go:172] (0xc0009762c0) (0xc0003fe460) Stream removed, broadcasting: 5\nI0514 13:16:10.636937 995 log.go:172] (0xc0009762c0) Go away received\nI0514 13:16:10.636969 995 log.go:172] (0xc0009762c0) (0xc0002b2000) Stream removed, broadcasting: 7\n" May 14 13:16:10.679: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:16:12.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5578" for this suite. 
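
Since `--generator=job/v1` is deprecated (as the stderr above warns), the object it creates is approximately the following Job; the run then attaches to the Job's pod, feeds it `abcd1234` on stdin, and `--rm` deletes the Job afterwards. A sketch, not the generator's exact output:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: e2e-test-rm-busybox-job
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: e2e-test-rm-busybox-job
            image: docker.io/library/busybox:1.29
            command: ["sh", "-c", "cat && echo 'stdin closed'"]
            stdin: true                # required so the attach has a stdin to write to
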
May 14 13:16:18.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:16:18.793: INFO: namespace kubectl-5578 deletion completed in 6.102860223s • [SLOW TEST:12.368 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:16:18.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:16:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1633" for this suite. 
May 14 13:16:30.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:16:30.578: INFO: namespace watch-1633 deletion completed in 6.17692606s • [SLOW TEST:11.785 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:16:30.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 in namespace container-probe-7434 May 14 13:16:34.673: INFO: Started pod liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 in namespace container-probe-7434 STEP: checking the pod's current state and verifying that restartCount is present May 14 13:16:34.676: INFO: Initial restart count of pod liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 is 0 May 14 13:16:52.716: INFO: Restart count of pod container-probe-7434/liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 is now 1 (18.039159654s elapsed) May 14 13:17:12.766: INFO: Restart count of pod container-probe-7434/liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 is now 2 (38.089134033s elapsed) May 14 13:17:32.809: INFO: Restart count of pod container-probe-7434/liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 is now 3 (58.132116231s elapsed) May 14 13:17:52.845: INFO: Restart count of pod container-probe-7434/liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 is now 4 (1m18.168420686s elapsed) May 14 13:18:52.974: INFO: Restart count of pod container-probe-7434/liveness-4891a220-4f65-4cfb-81fa-eaf977cc2b63 is now 5 (2m18.297977456s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:18:52.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7434" for this suite. 
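
The monotonic-restart-count spec above relies on a liveness probe that is guaranteed to start failing: each failure kills the container, the kubelet restarts it, and restartCount climbs 1 through 5 as the elapsed times show. A sketch of such a pod, assuming the suite's liveness test image (image tag and probe timings are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http
    spec:
      containers:
      - name: liveness
        image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumption: serves /healthz, then fails it
        command: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
          failureThreshold: 1          # one failed probe is enough to trigger a restart

The widening gaps between restarts (18s, then 20s, then eventually a minute) come from the kubelet's crash-loop back-off; the assertion itself is simply that restartCount never decreases.
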
May 14 13:18:59.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:18:59.092: INFO: namespace container-probe-7434 deletion completed in 6.097282472s • [SLOW TEST:148.513 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:18:59.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 14 13:18:59.175: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:18:59.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8421" for this suite. 
May 14 13:19:05.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:19:05.334: INFO: namespace kubectl-8421 deletion completed in 6.080529275s • [SLOW TEST:6.242 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:19:05.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 14 13:19:05.432: INFO: Waiting up to 5m0s for pod "var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439" in namespace "var-expansion-2512" to be "success or failure" May 14 13:19:05.505: INFO: Pod "var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439": Phase="Pending", Reason="", readiness=false. Elapsed: 73.353817ms May 14 13:19:07.512: INFO: Pod "var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079909372s May 14 13:19:09.516: INFO: Pod "var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439": Phase="Running", Reason="", readiness=true. Elapsed: 4.083915969s May 14 13:19:11.520: INFO: Pod "var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088167993s STEP: Saw pod success May 14 13:19:11.520: INFO: Pod "var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439" satisfied condition "success or failure" May 14 13:19:11.523: INFO: Trying to get logs from node iruya-worker pod var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439 container dapi-container: STEP: delete the pod May 14 13:19:11.556: INFO: Waiting for pod var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439 to disappear May 14 13:19:11.569: INFO: Pod var-expansion-1455daa2-20c7-46cf-a412-96652a1fb439 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:19:11.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2512" for this suite. 
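
Variable expansion in args, as tested above, uses the `$(VAR)` syntax that the kubelet substitutes before the container starts — no shell is involved. A minimal sketch (names and values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        env:
        - name: TEST_VAR
          value: "test-value"
        command: ["sh", "-c"]
        args: ["echo $(TEST_VAR)"]     # expanded by the kubelet to: echo test-value

An unresolvable reference such as `$(MISSING)` is left untouched rather than treated as an error, which distinguishes kubelet expansion from shell expansion.
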
May 14 13:19:17.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:19:17.659: INFO: namespace var-expansion-2512 deletion completed in 6.086183346s • [SLOW TEST:12.325 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:19:17.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 14 13:19:17.863: INFO: Waiting up to 5m0s for pod "pod-7870d66a-2189-4130-aaf8-6a03780f8823" in namespace "emptydir-5483" to be "success or failure" May 14 13:19:17.899: INFO: Pod "pod-7870d66a-2189-4130-aaf8-6a03780f8823": Phase="Pending", Reason="", readiness=false. Elapsed: 35.709286ms May 14 13:19:19.939: INFO: Pod "pod-7870d66a-2189-4130-aaf8-6a03780f8823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07567209s May 14 13:19:21.943: INFO: Pod "pod-7870d66a-2189-4130-aaf8-6a03780f8823": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079044713s STEP: Saw pod success May 14 13:19:21.943: INFO: Pod "pod-7870d66a-2189-4130-aaf8-6a03780f8823" satisfied condition "success or failure" May 14 13:19:21.945: INFO: Trying to get logs from node iruya-worker2 pod pod-7870d66a-2189-4130-aaf8-6a03780f8823 container test-container: STEP: delete the pod May 14 13:19:21.965: INFO: Waiting for pod pod-7870d66a-2189-4130-aaf8-6a03780f8823 to disappear May 14 13:19:21.970: INFO: Pod pod-7870d66a-2189-4130-aaf8-6a03780f8823 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:19:21.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5483" for this suite. 
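
The emptyDir variant above — (non-root,0777,default) — means: run as a non-root UID, expect 0777 permissions on the volume directory, and use the default (node-disk) medium. Sketched as a pod with the permission check folded into the command (the real test image performs more thorough checks):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-perms-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                # non-root
      volumes:
      - name: test-volume
        emptyDir: {}                   # default medium; medium: Memory would give tmpfs instead
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "stat -c '%a' /test-volume && touch /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
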
May 14 13:19:27.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:19:28.054: INFO: namespace emptydir-5483 deletion completed in 6.081253374s • [SLOW TEST:10.394 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:19:28.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 14 13:19:28.139: INFO: Waiting up to 5m0s for pod "downward-api-2056528f-449a-472e-94dc-d131fab16849" in namespace "downward-api-2239" to be "success or failure" May 14 13:19:28.144: INFO: Pod "downward-api-2056528f-449a-472e-94dc-d131fab16849": Phase="Pending", Reason="", readiness=false. Elapsed: 4.899177ms May 14 13:19:30.148: INFO: Pod "downward-api-2056528f-449a-472e-94dc-d131fab16849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008736494s May 14 13:19:32.152: INFO: Pod "downward-api-2056528f-449a-472e-94dc-d131fab16849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012437587s STEP: Saw pod success May 14 13:19:32.152: INFO: Pod "downward-api-2056528f-449a-472e-94dc-d131fab16849" satisfied condition "success or failure" May 14 13:19:32.155: INFO: Trying to get logs from node iruya-worker pod downward-api-2056528f-449a-472e-94dc-d131fab16849 container dapi-container: STEP: delete the pod May 14 13:19:32.391: INFO: Waiting for pod downward-api-2056528f-449a-472e-94dc-d131fab16849 to disappear May 14 13:19:32.395: INFO: Pod downward-api-2056528f-449a-472e-94dc-d131fab16849 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:19:32.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2239" for this suite. 
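
Pod UID via the Downward API follows the same pattern as the host-IP sketch earlier; only the env stanza changes (variable name illustrative):

        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
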
May 14 13:19:38.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:19:38.480: INFO: namespace downward-api-2239 deletion completed in 6.082055168s • [SLOW TEST:10.426 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:19:38.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9f90d6bc-74c1-4132-9312-008c5d313e7b STEP: Creating configMap with name cm-test-opt-upd-a8912347-d2c4-4a23-9713-0857e1bc40f8 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9f90d6bc-74c1-4132-9312-008c5d313e7b STEP: Updating configmap cm-test-opt-upd-a8912347-d2c4-4a23-9713-0857e1bc40f8 STEP: Creating configMap with name cm-test-opt-create-3f087239-bbf5-4dfe-8884-7eab0c42fe7e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:21:09.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7688" for this suite. 
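
The optional-ConfigMap behaviour exercised above hinges on `optional: true`: the pod starts even if a referenced ConfigMap is missing, and the kubelet's periodic volume sync later reflects deletions, updates, and late creations — hence the test's long wait. A sketch with two of the three volumes (names echo the ones in the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-configmap-demo
    spec:
      volumes:
      - name: cm-del
        configMap:
          name: cm-test-opt-del        # deleted mid-test; optional, so the pod keeps running
          optional: true
      - name: cm-create
        configMap:
          name: cm-test-opt-create     # created only after the pod is already running
          optional: true
      containers:
      - name: watcher
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "while true; do ls /etc/cm-del /etc/cm-create; sleep 5; done"]
        volumeMounts:
        - name: cm-del
          mountPath: /etc/cm-del
        - name: cm-create
          mountPath: /etc/cm-create
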
May 14 13:21:33.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:21:33.459: INFO: namespace configmap-7688 deletion completed in 24.172583605s • [SLOW TEST:114.979 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:21:33.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:21:33.519: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 14 13:21:35.672: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:21:36.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1891" for this suite. 
May 14 13:21:44.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:21:45.045: INFO: namespace replication-controller-1891 deletion completed in 8.32772019s • [SLOW TEST:11.585 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:21:45.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:21:45.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87" in namespace "projected-3628" to be "success or failure" May 14 13:21:45.135: INFO: Pod "downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87": Phase="Pending", Reason="", readiness=false. Elapsed: 26.841992ms May 14 13:21:47.139: INFO: Pod "downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030888386s May 14 13:21:49.145: INFO: Pod "downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036643483s STEP: Saw pod success May 14 13:21:49.145: INFO: Pod "downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87" satisfied condition "success or failure" May 14 13:21:49.147: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87 container client-container: STEP: delete the pod May 14 13:21:49.160: INFO: Waiting for pod downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87 to disappear May 14 13:21:49.187: INFO: Pod downwardapi-volume-e4cad7f3-993f-43fa-91ea-5c276256fd87 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:21:49.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3628" for this suite. 
May 14 13:21:55.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:21:55.284: INFO: namespace projected-3628 deletion completed in 6.093862863s • [SLOW TEST:10.239 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:21:55.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-8bdc6fd2-883e-4125-bf36-edd437f61577 STEP: Creating a pod to test consume configMaps May 14 13:21:55.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-cac54781-c207-4727-b174-301606fff8ca" in namespace "configmap-2611" to be "success or failure" May 14 13:21:55.388: INFO: Pod "pod-configmaps-cac54781-c207-4727-b174-301606fff8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 19.120267ms May 14 13:21:57.391: INFO: Pod "pod-configmaps-cac54781-c207-4727-b174-301606fff8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022891812s May 14 13:21:59.396: INFO: Pod "pod-configmaps-cac54781-c207-4727-b174-301606fff8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027104725s STEP: Saw pod success May 14 13:21:59.396: INFO: Pod "pod-configmaps-cac54781-c207-4727-b174-301606fff8ca" satisfied condition "success or failure" May 14 13:21:59.399: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-cac54781-c207-4727-b174-301606fff8ca container configmap-volume-test: STEP: delete the pod May 14 13:21:59.549: INFO: Waiting for pod pod-configmaps-cac54781-c207-4727-b174-301606fff8ca to disappear May 14 13:21:59.560: INFO: Pod pod-configmaps-cac54781-c207-4727-b174-301606fff8ca no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:21:59.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2611" for this suite. 
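------------------------------
Note: defaultMode controls the permission bits of every file a configMap volume projects, which is what this test asserts. A minimal sketch of the shape it creates (names, mode, and image are illustrative; the container name configmap-volume-test matches the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                  # illustrative; the suite uses a mount-test image
    command: ["sh", "-c", "ls -l /etc/configmap-volume/data-1 && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400             # applied to each projected key; the test checks the mode in the container output
------------------------------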
May 14 13:22:05.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:22:05.694: INFO: namespace configmap-2611 deletion completed in 6.129566139s • [SLOW TEST:10.409 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:22:05.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-fb402b57-b7da-4991-ab90-f1d940a790fe STEP: Creating secret with name secret-projected-all-test-volume-4d3a9918-0e60-4521-a16b-c908221af7ee STEP: Creating a pod to test Check all projections for projected volume plugin May 14 13:22:05.820: INFO: Waiting up to 5m0s for pod "projected-volume-82872864-907f-47dd-8adf-5820f269ce30" in namespace "projected-9040" to be "success or failure" May 14 13:22:05.835: INFO: Pod "projected-volume-82872864-907f-47dd-8adf-5820f269ce30": Phase="Pending", Reason="", readiness=false. Elapsed: 14.082663ms May 14 13:22:07.985: INFO: Pod "projected-volume-82872864-907f-47dd-8adf-5820f269ce30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164427958s May 14 13:22:09.989: INFO: Pod "projected-volume-82872864-907f-47dd-8adf-5820f269ce30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16904847s STEP: Saw pod success May 14 13:22:09.990: INFO: Pod "projected-volume-82872864-907f-47dd-8adf-5820f269ce30" satisfied condition "success or failure" May 14 13:22:09.993: INFO: Trying to get logs from node iruya-worker pod projected-volume-82872864-907f-47dd-8adf-5820f269ce30 container projected-all-volume-test: STEP: delete the pod May 14 13:22:10.010: INFO: Waiting for pod projected-volume-82872864-907f-47dd-8adf-5820f269ce30 to disappear May 14 13:22:10.014: INFO: Pod projected-volume-82872864-907f-47dd-8adf-5820f269ce30 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:22:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9040" for this suite. 
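------------------------------
Note: a "projected" volume can merge configMap, secret, and downward API sources under a single mount point, and this test verifies all three land in the same volume. A minimal sketch (all names, keys, and the image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox                 # illustrative
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume
          items:
          - key: configmap-data-1
            path: cm-data
      - secret:
          name: secret-projected-all-test-volume
          items:
          - key: secret-data-1
            path: secret-data
------------------------------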
May 14 13:22:16.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:22:16.095: INFO: namespace projected-9040 deletion completed in 6.077469011s • [SLOW TEST:10.400 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:22:16.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:23:16.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2278" for this suite. 
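------------------------------
Note: this test runs a pod whose readiness probe always fails and asserts, over the minute-long window visible in the timestamps above, that the pod never becomes Ready and that restartCount stays at 0 — readiness failures, unlike liveness failures, never restart a container. A minimal sketch (image, command, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-example     # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox                 # illustrative
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always exits non-zero, so the container is never reported Ready
      initialDelaySeconds: 5
      periodSeconds: 5
------------------------------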
May 14 13:23:38.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:23:38.297: INFO: namespace container-probe-2278 deletion completed in 22.097608443s • [SLOW TEST:82.202 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:23:38.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 13:23:38.353: INFO: Waiting up to 5m0s for pod "pod-32887267-983a-4f01-8adc-d39d22c7a65f" in namespace "emptydir-6513" to be "success or failure" May 14 13:23:38.418: INFO: Pod "pod-32887267-983a-4f01-8adc-d39d22c7a65f": Phase="Pending", Reason="", readiness=false. Elapsed: 64.891949ms May 14 13:23:40.515: INFO: Pod "pod-32887267-983a-4f01-8adc-d39d22c7a65f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161726175s May 14 13:23:42.519: INFO: Pod "pod-32887267-983a-4f01-8adc-d39d22c7a65f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165938229s STEP: Saw pod success May 14 13:23:42.519: INFO: Pod "pod-32887267-983a-4f01-8adc-d39d22c7a65f" satisfied condition "success or failure" May 14 13:23:42.522: INFO: Trying to get logs from node iruya-worker pod pod-32887267-983a-4f01-8adc-d39d22c7a65f container test-container: STEP: delete the pod May 14 13:23:42.552: INFO: Waiting for pod pod-32887267-983a-4f01-8adc-d39d22c7a65f to disappear May 14 13:23:42.568: INFO: Pod pod-32887267-983a-4f01-8adc-d39d22c7a65f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:23:42.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6513" for this suite. 
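------------------------------
Note: the (root,0644,tmpfs) variant writes a file as root with mode 0644 into a memory-backed emptyDir and checks both the content and the permission bits. A minimal sketch (image, command, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs-example
spec:
  containers:
  - name: test-container
    image: busybox                  # illustrative; the suite uses its mount-test image
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file && mount | grep /test-volume"]
    securityContext:
      runAsUser: 0                  # the "root" in the test name
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # the "tmpfs" in the test name: backs the volume with RAM
------------------------------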
May 14 13:23:48.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:23:48.744: INFO: namespace emptydir-6513 deletion completed in 6.173056651s • [SLOW TEST:10.447 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:23:48.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:23:48.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3" in namespace "projected-8361" to be "success or failure" May 14 13:23:48.858: INFO: Pod "downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.104872ms May 14 13:23:50.861: INFO: Pod "downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032754053s May 14 13:23:52.871: INFO: Pod "downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043332839s STEP: Saw pod success May 14 13:23:52.872: INFO: Pod "downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3" satisfied condition "success or failure" May 14 13:23:52.874: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3 container client-container: STEP: delete the pod May 14 13:23:53.149: INFO: Waiting for pod downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3 to disappear May 14 13:23:53.345: INFO: Pod downwardapi-volume-c368e4e2-7f89-4d7d-8fd7-b1e8d0f0a7f3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:23:53.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8361" for this suite. 
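------------------------------
Note: when a container declares no CPU limit, the downward API reports the node's allocatable CPU in its place, which is what this test asserts. A minimal sketch (names and image are illustrative; the deliberate omission of resources.limits.cpu is the point of the test):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example
spec:
  containers:
  - name: client-container
    image: busybox                  # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # no limit is set above, so this resolves to node allocatable CPU
------------------------------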
May 14 13:23:59.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:23:59.476: INFO: namespace projected-8361 deletion completed in 6.127079933s • [SLOW TEST:10.731 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:23:59.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 13:23:59.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3630' May 14 13:24:02.677: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 13:24:02.677: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 14 13:24:02.693: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-98t4q] May 14 13:24:02.693: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-98t4q" in namespace "kubectl-3630" to be "running and ready" May 14 13:24:02.717: INFO: Pod "e2e-test-nginx-rc-98t4q": Phase="Pending", Reason="", readiness=false. Elapsed: 24.124007ms May 14 13:24:04.720: INFO: Pod "e2e-test-nginx-rc-98t4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027512207s May 14 13:24:06.725: INFO: Pod "e2e-test-nginx-rc-98t4q": Phase="Running", Reason="", readiness=true. Elapsed: 4.032300912s May 14 13:24:06.725: INFO: Pod "e2e-test-nginx-rc-98t4q" satisfied condition "running and ready" May 14 13:24:06.725: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-98t4q] May 14 13:24:06.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3630' May 14 13:24:06.962: INFO: stderr: "" May 14 13:24:06.962: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 14 13:24:06.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3630' May 14 13:24:07.088: INFO: stderr: "" May 14 13:24:07.088: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:24:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3630" for this suite. May 14 13:24:29.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:24:29.193: INFO: namespace kubectl-3630 deletion completed in 22.101211823s • [SLOW TEST:29.716 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:24:29.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0514 13:25:09.491832 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 13:25:09.491: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:25:09.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1367" for this suite. May 14 13:25:27.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:25:27.887: INFO: namespace gc-1367 deletion completed in 18.387510411s • [SLOW TEST:58.694 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:25:27.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-a141e044-fd7f-44f0-8ff8-03f83e98e4f3 STEP: Creating secret with name s-test-opt-upd-a9c6184a-bc7d-4533-8b9b-5a9160a9e457 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a141e044-fd7f-44f0-8ff8-03f83e98e4f3 STEP: Updating secret s-test-opt-upd-a9c6184a-bc7d-4533-8b9b-5a9160a9e457 STEP: Creating secret with name s-test-opt-create-f331b8e4-9d8b-48b3-86e8-12ce75bc3301 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:25:44.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8432" for this suite. 
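------------------------------
Note: the secret names above carry generated suffixes, but the volume layout the test uses looks roughly like this — several secrets projected into one volume, each marked optional so the pod can start and keep running while they are deleted, updated, and created underneath it (names shortened from the log; image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox                      # illustrative
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del          # deleted mid-test; optional, so the mount survives
          optional: true
      - secret:
          name: s-test-opt-upd          # updated mid-test
          optional: true
      - secret:
          name: s-test-opt-create       # created only after the pod starts
          optional: true

The kubelet re-syncs projected volumes periodically rather than instantly, so the "waiting to observe update in volume" step is simply polling the mounted files until the delete, update, and create all become visible.
------------------------------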
May 14 13:26:06.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:26:06.915: INFO: namespace projected-8432 deletion completed in 22.090109348s • [SLOW TEST:39.028 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:26:06.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:26:07.174: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1b9df4ba-7f38-41ea-90db-3c4a713f1a38", Controller:(*bool)(0xc002b0f232), BlockOwnerDeletion:(*bool)(0xc002b0f233)}} May 14 13:26:07.180: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3d94c996-4543-47fe-b22f-64b7f7e8beee", Controller:(*bool)(0xc002624dea), BlockOwnerDeletion:(*bool)(0xc002624deb)}} May 14 13:26:07.221: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a4b8fe39-a791-4ee8-abb7-78f7ca29bcf2", Controller:(*bool)(0xc002b0f3ea), BlockOwnerDeletion:(*bool)(0xc002b0f3eb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:26:12.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-26" for this suite. 
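------------------------------
Note: the three ownerReferences printed above form a deliberate cycle — pod1 owned by pod3, pod2 by pod1, pod3 by pod2. Expressed as a manifest fragment, pod1's metadata looks like the following; the uid is copied from the log line above (an ownerReference must always carry the live UID of the object it names), while the controller/blockOwnerDeletion values are not shown in the log and are set to true here purely for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 1b9df4ba-7f38-41ea-90db-3c4a713f1a38   # pod3's UID, taken from the log above
    controller: true                            # value assumed, not shown in the log
    blockOwnerDeletion: true                    # value assumed, not shown in the log

The assertion is that the garbage collector's object graph handles the circular dependency and still deletes the pods, rather than blocking forever on an owner that is itself waiting to be deleted.
------------------------------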
May 14 13:26:18.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:26:18.441: INFO: namespace gc-26 deletion completed in 6.120732128s • [SLOW TEST:11.526 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:26:18.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 14 13:26:18.513: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:26:27.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-245" for this suite. 
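------------------------------
Note: "invoke init containers on a RestartAlways pod" boils down to a pod shape like the following — each init container must run to completion, in order, before the regular container starts, even though the pod-level restart policy is Always (a minimal sketch; names and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]          # must exit 0 before init2 starts
  - name: init2
    image: busybox
    command: ["/bin/true"]          # must exit 0 before the app container starts
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]

The test watches status.initContainerStatuses and asserts that both init containers terminate successfully before the pod reports Ready.
------------------------------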
May 14 13:26:49.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:26:49.253: INFO: namespace init-container-245 deletion completed in 22.087185441s • [SLOW TEST:30.811 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:26:49.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-3409 I0514 13:26:49.373012 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3409, replica count: 1 I0514 13:26:50.423577 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 13:26:51.423770 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 13:26:52.423978 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 13:26:53.424110 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 13:26:53.554: INFO: Created: latency-svc-9r444 May 14 13:26:53.579: INFO: Got endpoints: latency-svc-9r444 [55.010091ms] May 14 13:26:53.800: INFO: Created: latency-svc-956tg May 14 13:26:53.910: INFO: Got endpoints: latency-svc-956tg [331.090134ms] May 14 13:26:53.938: INFO: Created: latency-svc-rhpj8 May 14 13:26:53.966: INFO: Got endpoints: latency-svc-rhpj8 [386.67569ms] May 14 13:26:54.103: INFO: Created: latency-svc-lzc65 May 14 13:26:54.107: INFO: Got endpoints: latency-svc-lzc65 [528.125675ms] May 14 13:26:54.178: INFO: Created: latency-svc-q6zh6 May 14 13:26:54.240: INFO: Got endpoints: latency-svc-q6zh6 [660.646238ms] May 14 13:26:54.310: INFO: Created: latency-svc-h975z May 14 13:26:54.323: INFO: Got endpoints: latency-svc-h975z [744.446663ms] May 14 13:26:54.479: INFO: Created: latency-svc-pnjlv May 14 13:26:54.544: INFO: Got endpoints: latency-svc-pnjlv [965.185958ms] May 14 13:26:54.545: INFO: Created: latency-svc-f8qq9 May 14 13:26:54.665: INFO: Got endpoints: latency-svc-f8qq9 [1.085900393s] May 14 13:26:54.669: INFO: Created: latency-svc-vq54r May 14 13:26:54.704: INFO: Got endpoints: latency-svc-vq54r [1.125345611s] May 14 13:26:54.736: INFO: Created: latency-svc-g2zxm May 14 13:26:54.765: INFO: Got endpoints: latency-svc-g2zxm [1.186476839s] May 14 13:26:54.832: 
INFO: Created: latency-svc-hvrf6 May 14 13:26:54.861: INFO: Got endpoints: latency-svc-hvrf6 [1.281568723s] May 14 13:26:54.890: INFO: Created: latency-svc-6xf7g May 14 13:26:54.970: INFO: Got endpoints: latency-svc-6xf7g [1.391171071s] May 14 13:26:55.000: INFO: Created: latency-svc-977tf May 14 13:26:55.014: INFO: Got endpoints: latency-svc-977tf [1.435420366s] May 14 13:26:55.048: INFO: Created: latency-svc-hst8q May 14 13:26:55.154: INFO: Got endpoints: latency-svc-hst8q [1.574743334s] May 14 13:26:55.157: INFO: Created: latency-svc-rgsmp May 14 13:26:55.164: INFO: Got endpoints: latency-svc-rgsmp [1.58498268s] May 14 13:26:55.193: INFO: Created: latency-svc-cfwzz May 14 13:26:55.213: INFO: Got endpoints: latency-svc-cfwzz [1.634428977s] May 14 13:26:55.234: INFO: Created: latency-svc-bjhph May 14 13:26:55.249: INFO: Got endpoints: latency-svc-bjhph [1.339257007s] May 14 13:26:55.318: INFO: Created: latency-svc-p8284 May 14 13:26:55.342: INFO: Got endpoints: latency-svc-p8284 [1.376880829s] May 14 13:26:55.345: INFO: Created: latency-svc-xs7pt May 14 13:26:55.372: INFO: Got endpoints: latency-svc-xs7pt [1.264767669s] May 14 13:26:55.403: INFO: Created: latency-svc-bjbbs May 14 13:26:55.413: INFO: Got endpoints: latency-svc-bjbbs [1.173454524s] May 14 13:26:55.474: INFO: Created: latency-svc-v8qwc May 14 13:26:55.478: INFO: Got endpoints: latency-svc-v8qwc [1.154342899s] May 14 13:26:55.516: INFO: Created: latency-svc-lvlzj May 14 13:26:55.532: INFO: Got endpoints: latency-svc-lvlzj [988.15555ms] May 14 13:26:55.564: INFO: Created: latency-svc-9z2cl May 14 13:26:55.653: INFO: Got endpoints: latency-svc-9z2cl [987.952973ms] May 14 13:26:55.656: INFO: Created: latency-svc-vlcsr May 14 13:26:55.658: INFO: Got endpoints: latency-svc-vlcsr [954.024175ms] May 14 13:26:55.690: INFO: Created: latency-svc-m56v4 May 14 13:26:55.704: INFO: Got endpoints: latency-svc-m56v4 [938.286686ms] May 14 13:26:55.738: INFO: Created: latency-svc-d29c8 May 14 13:26:55.756: INFO: Got endpoints: latency-svc-d29c8 [895.112975ms] May 14 13:26:55.823: INFO: Created: latency-svc-nkk9w May 14 13:26:55.836: INFO: Got endpoints: latency-svc-nkk9w [865.551254ms] May 14 13:26:55.858: INFO: Created: latency-svc-sqhtk May 14 13:26:55.878: INFO: Got endpoints: latency-svc-sqhtk [863.773049ms] May 14 13:26:55.918: INFO: Created: latency-svc-pqmp9 May 14 13:26:56.036: INFO: Got endpoints: latency-svc-pqmp9 [882.045757ms] May 14 13:26:56.039: INFO: Created: latency-svc-999bx May 14 13:26:56.047: INFO: Got endpoints: latency-svc-999bx [882.56568ms] May 14 13:26:56.097: INFO: Created: latency-svc-vgd9x May 14 13:26:56.125: INFO: Got endpoints: latency-svc-vgd9x [911.676466ms] May 14 13:26:56.164: INFO: Created: latency-svc-z7n8x May 14 13:26:56.206: INFO: Got endpoints: latency-svc-z7n8x [956.775399ms] May 14 13:26:56.259: INFO: Created: latency-svc-5842f May 14 13:26:56.299: INFO: Got endpoints: latency-svc-5842f [956.880391ms] May 14 13:26:56.331: INFO: Created: latency-svc-rlxzl May 14 13:26:56.360: INFO: Got endpoints: latency-svc-rlxzl [987.720413ms] May 14 13:26:56.443: INFO: Created: latency-svc-4bcb6 May 14 13:26:56.456: INFO: Got endpoints: latency-svc-4bcb6 [1.042459805s] May 14 13:26:56.475: INFO: Created: latency-svc-v7q9t May 14 13:26:56.492: INFO: Got endpoints: latency-svc-v7q9t [1.014129253s] May 14 13:26:56.517: INFO: Created: latency-svc-z8hbt May 14 13:26:56.535: INFO: Got endpoints: latency-svc-z8hbt [1.00239974s] May 14 13:26:56.588: INFO: Created: latency-svc-h7c28 May 14 13:26:56.591: INFO: Got endpoints: 
latency-svc-h7c28 [937.758678ms] May 14 13:26:56.657: INFO: Created: latency-svc-zptnr May 14 13:26:56.672: INFO: Got endpoints: latency-svc-zptnr [1.013793415s] May 14 13:26:56.751: INFO: Created: latency-svc-dmjnt May 14 13:26:56.757: INFO: Got endpoints: latency-svc-dmjnt [1.053296427s] May 14 13:26:56.806: INFO: Created: latency-svc-jqk7f May 14 13:26:56.823: INFO: Got endpoints: latency-svc-jqk7f [1.06712417s] May 14 13:26:56.842: INFO: Created: latency-svc-x7n29 May 14 13:26:56.898: INFO: Got endpoints: latency-svc-x7n29 [1.06243813s] May 14 13:26:56.908: INFO: Created: latency-svc-dzstz May 14 13:26:56.937: INFO: Got endpoints: latency-svc-dzstz [1.058890499s] May 14 13:26:56.967: INFO: Created: latency-svc-95lzl May 14 13:26:56.979: INFO: Got endpoints: latency-svc-95lzl [943.522409ms] May 14 13:26:57.042: INFO: Created: latency-svc-wqvh4 May 14 13:26:57.046: INFO: Got endpoints: latency-svc-wqvh4 [999.027325ms] May 14 13:26:57.083: INFO: Created: latency-svc-2sgqt May 14 13:26:57.106: INFO: Got endpoints: latency-svc-2sgqt [980.425631ms] May 14 13:26:57.186: INFO: Created: latency-svc-4bl6m May 14 13:26:57.190: INFO: Got endpoints: latency-svc-4bl6m [984.03742ms] May 14 13:26:57.225: INFO: Created: latency-svc-n4qqf May 14 13:26:57.239: INFO: Got endpoints: latency-svc-n4qqf [939.124668ms] May 14 13:26:57.262: INFO: Created: latency-svc-nvc5c May 14 13:26:57.274: INFO: Got endpoints: latency-svc-nvc5c [914.790368ms] May 14 13:26:57.334: INFO: Created: latency-svc-np9rb May 14 13:26:57.347: INFO: Got endpoints: latency-svc-np9rb [891.477428ms] May 14 13:26:57.372: INFO: Created: latency-svc-nhlk2 May 14 13:26:57.383: INFO: Got endpoints: latency-svc-nhlk2 [890.80599ms] May 14 13:26:57.405: INFO: Created: latency-svc-2qk9r May 14 13:26:57.473: INFO: Got endpoints: latency-svc-2qk9r [938.650808ms] May 14 13:26:57.476: INFO: Created: latency-svc-hlqq4 May 14 13:26:57.479: INFO: Got endpoints: latency-svc-hlqq4 [888.373419ms] May 14 13:26:57.513: INFO: Created: latency-svc-t8c7v May 14 13:26:57.528: INFO: Got endpoints: latency-svc-t8c7v [855.34538ms] May 14 13:26:57.561: INFO: Created: latency-svc-v6dkn May 14 13:26:57.599: INFO: Got endpoints: latency-svc-v6dkn [841.725392ms] May 14 13:26:57.610: INFO: Created: latency-svc-msgft May 14 13:26:57.624: INFO: Got endpoints: latency-svc-msgft [801.085322ms] May 14 13:26:57.645: INFO: Created: latency-svc-jwbbv May 14 13:26:57.660: INFO: Got endpoints: latency-svc-jwbbv [761.975798ms] May 14 13:26:57.687: INFO: Created: latency-svc-8x87f May 14 13:26:57.724: INFO: Got endpoints: latency-svc-8x87f [787.223236ms] May 14 13:26:57.737: INFO: Created: latency-svc-hvz8j May 14 13:26:57.751: INFO: Got endpoints: latency-svc-hvz8j [771.399561ms] May 14 13:26:57.796: INFO: Created: latency-svc-pwcjf May 14 13:26:57.825: INFO: Got endpoints: latency-svc-pwcjf [778.989818ms] May 14 13:26:57.886: INFO: Created: latency-svc-v8wvv May 14 13:26:57.902: INFO: Got endpoints: latency-svc-v8wvv [796.227772ms] May 14 13:26:57.940: INFO: Created: latency-svc-fq9v7 May 14 13:26:57.956: INFO: Got endpoints: latency-svc-fq9v7 [765.869692ms] May 14 13:26:58.024: INFO: Created: latency-svc-sxldl May 14 13:26:58.059: INFO: Got endpoints: latency-svc-sxldl [820.254663ms] May 14 13:26:58.060: INFO: Created: latency-svc-7nkcp May 14 13:26:58.077: INFO: Got endpoints: latency-svc-7nkcp [802.056753ms] May 14 13:26:58.119: INFO: Created: latency-svc-zw5z8 May 14 13:26:58.155: INFO: Got endpoints: latency-svc-zw5z8 [808.276484ms] May 14 13:26:58.174: INFO: Created: 
latency-svc-xfq42 May 14 13:26:58.191: INFO: Got endpoints: latency-svc-xfq42 [807.806087ms] May 14 13:26:58.216: INFO: Created: latency-svc-82n5q May 14 13:26:58.232: INFO: Got endpoints: latency-svc-82n5q [758.838177ms] May 14 13:26:58.317: INFO: Created: latency-svc-vfnnv May 14 13:26:58.323: INFO: Got endpoints: latency-svc-vfnnv [843.513998ms] May 14 13:26:58.378: INFO: Created: latency-svc-79zpw May 14 13:26:58.408: INFO: Got endpoints: latency-svc-79zpw [880.188124ms] May 14 13:26:58.485: INFO: Created: latency-svc-tcr2f May 14 13:26:58.521: INFO: Got endpoints: latency-svc-tcr2f [921.575747ms] May 14 13:26:58.522: INFO: Created: latency-svc-8wrvl May 14 13:26:58.534: INFO: Got endpoints: latency-svc-8wrvl [909.66036ms] May 14 13:26:58.558: INFO: Created: latency-svc-9cht2 May 14 13:26:58.570: INFO: Got endpoints: latency-svc-9cht2 [909.742624ms] May 14 13:26:58.623: INFO: Created: latency-svc-ddsxm May 14 13:26:58.641: INFO: Got endpoints: latency-svc-ddsxm [916.807467ms] May 14 13:26:58.641: INFO: Created: latency-svc-h56c5 May 14 13:26:58.655: INFO: Got endpoints: latency-svc-h56c5 [903.536356ms] May 14 13:26:58.677: INFO: Created: latency-svc-ccrbz May 14 13:26:58.691: INFO: Got endpoints: latency-svc-ccrbz [865.975796ms] May 14 13:26:58.713: INFO: Created: latency-svc-5r22l May 14 13:26:58.761: INFO: Got endpoints: latency-svc-5r22l [858.89361ms] May 14 13:26:58.791: INFO: Created: latency-svc-hrfcd May 14 13:26:58.805: INFO: Got endpoints: latency-svc-hrfcd [848.99925ms] May 14 13:26:58.857: INFO: Created: latency-svc-vww59 May 14 13:26:58.890: INFO: Got endpoints: latency-svc-vww59 [830.60686ms] May 14 13:26:58.911: INFO: Created: latency-svc-gvxbh May 14 13:26:58.920: INFO: Got endpoints: latency-svc-gvxbh [842.924464ms] May 14 13:26:58.941: INFO: Created: latency-svc-vsnsp May 14 13:26:58.950: INFO: Got endpoints: latency-svc-vsnsp [794.465075ms] May 14 13:26:58.977: INFO: Created: latency-svc-dkg8f May 14 13:26:59.030: INFO: Got endpoints: latency-svc-dkg8f [839.425415ms] May 14 13:26:59.034: INFO: Created: latency-svc-xj8hn May 14 13:26:59.040: INFO: Got endpoints: latency-svc-xj8hn [807.753743ms] May 14 13:26:59.061: INFO: Created: latency-svc-4ll8h May 14 13:26:59.090: INFO: Got endpoints: latency-svc-4ll8h [767.318021ms] May 14 13:26:59.109: INFO: Created: latency-svc-9r5fq May 14 13:26:59.125: INFO: Got endpoints: latency-svc-9r5fq [717.491346ms] May 14 13:26:59.205: INFO: Created: latency-svc-xc8d6 May 14 13:26:59.208: INFO: Got endpoints: latency-svc-xc8d6 [686.764943ms] May 14 13:26:59.242: INFO: Created: latency-svc-shjh6 May 14 13:26:59.258: INFO: Got endpoints: latency-svc-shjh6 [723.55174ms] May 14 13:26:59.290: INFO: Created: latency-svc-92kvq May 14 13:26:59.329: INFO: Got endpoints: latency-svc-92kvq [759.218629ms] May 14 13:26:59.355: INFO: Created: latency-svc-k6d2n May 14 13:26:59.373: INFO: Got endpoints: latency-svc-k6d2n [731.889161ms] May 14 13:26:59.391: INFO: Created: latency-svc-hfw68 May 14 13:26:59.402: INFO: Got endpoints: latency-svc-hfw68 [747.555375ms] May 14 13:26:59.422: INFO: Created: latency-svc-h4blp May 14 13:26:59.455: INFO: Got endpoints: latency-svc-h4blp [764.332216ms] May 14 13:26:59.475: INFO: Created: latency-svc-wqrtq May 14 13:26:59.493: INFO: Got endpoints: latency-svc-wqrtq [731.913082ms] May 14 13:26:59.512: INFO: Created: latency-svc-g4bls May 14 13:26:59.523: INFO: Got endpoints: latency-svc-g4bls [717.549312ms] May 14 13:26:59.541: INFO: Created: latency-svc-jw5hd May 14 13:26:59.611: INFO: Got endpoints: 
latency-svc-jw5hd [721.156781ms] May 14 13:26:59.632: INFO: Created: latency-svc-7l5mw May 14 13:26:59.661: INFO: Got endpoints: latency-svc-7l5mw [741.000893ms] May 14 13:26:59.691: INFO: Created: latency-svc-qnxxq May 14 13:26:59.753: INFO: Got endpoints: latency-svc-qnxxq [802.89781ms] May 14 13:26:59.760: INFO: Created: latency-svc-d592b May 14 13:26:59.770: INFO: Got endpoints: latency-svc-d592b [740.041545ms] May 14 13:26:59.788: INFO: Created: latency-svc-289wx May 14 13:26:59.800: INFO: Got endpoints: latency-svc-289wx [759.901611ms] May 14 13:26:59.824: INFO: Created: latency-svc-w5d4n May 14 13:26:59.836: INFO: Got endpoints: latency-svc-w5d4n [66.073144ms] May 14 13:26:59.900: INFO: Created: latency-svc-vfcmd May 14 13:26:59.937: INFO: Created: latency-svc-2xtgn May 14 13:26:59.937: INFO: Got endpoints: latency-svc-vfcmd [847.301826ms] May 14 13:26:59.975: INFO: Got endpoints: latency-svc-2xtgn [849.51883ms] May 14 13:26:59.998: INFO: Created: latency-svc-gp8dp May 14 13:27:00.049: INFO: Got endpoints: latency-svc-gp8dp [841.310342ms] May 14 13:27:00.070: INFO: Created: latency-svc-jbs42 May 14 13:27:00.089: INFO: Got endpoints: latency-svc-jbs42 [831.824089ms] May 14 13:27:00.148: INFO: Created: latency-svc-gsdx8 May 14 13:27:00.192: INFO: Got endpoints: latency-svc-gsdx8 [862.263997ms] May 14 13:27:00.226: INFO: Created: latency-svc-7b8wt May 14 13:27:00.348: INFO: Got endpoints: latency-svc-7b8wt [974.6065ms] May 14 13:27:00.387: INFO: Created: latency-svc-4f5fm May 14 13:27:00.401: INFO: Got endpoints: latency-svc-4f5fm [998.845113ms] May 14 13:27:00.504: INFO: Created: latency-svc-4pmpf May 14 13:27:00.507: INFO: Got endpoints: latency-svc-4pmpf [1.051934361s] May 14 13:27:00.561: INFO: Created: latency-svc-b5t4s May 14 13:27:00.576: INFO: Got endpoints: latency-svc-b5t4s [1.082779806s] May 14 13:27:00.600: INFO: Created: latency-svc-95xjk May 14 13:27:00.653: INFO: Got endpoints: latency-svc-95xjk [1.129882517s] May 14 13:27:00.672: INFO: Created: latency-svc-prddx May 14 13:27:00.685: INFO: Got endpoints: latency-svc-prddx [1.07394128s] May 14 13:27:00.711: INFO: Created: latency-svc-hp5rn May 14 13:27:00.721: INFO: Got endpoints: latency-svc-hp5rn [1.060351499s] May 14 13:27:00.749: INFO: Created: latency-svc-9rkrr May 14 13:27:00.803: INFO: Got endpoints: latency-svc-9rkrr [1.049822244s] May 14 13:27:00.814: INFO: Created: latency-svc-dhv5f May 14 13:27:00.836: INFO: Got endpoints: latency-svc-dhv5f [1.035582505s] May 14 13:27:00.885: INFO: Created: latency-svc-mxlgp May 14 13:27:00.902: INFO: Got endpoints: latency-svc-mxlgp [1.065723739s] May 14 13:27:01.030: INFO: Created: latency-svc-xbnll May 14 13:27:01.046: INFO: Got endpoints: latency-svc-xbnll [1.108604696s] May 14 13:27:01.123: INFO: Created: latency-svc-w49ch May 14 13:27:01.130: INFO: Got endpoints: latency-svc-w49ch [1.155088338s] May 14 13:27:01.155: INFO: Created: latency-svc-84bv5 May 14 13:27:01.173: INFO: Got endpoints: latency-svc-84bv5 [1.12395648s] May 14 13:27:01.288: INFO: Created: latency-svc-qg72n May 14 13:27:01.291: INFO: Got endpoints: latency-svc-qg72n [1.201034183s] May 14 13:27:01.353: INFO: Created: latency-svc-8rmhm May 14 13:27:01.371: INFO: Got endpoints: latency-svc-8rmhm [1.179616286s] May 14 13:27:01.444: INFO: Created: latency-svc-vs84v May 14 13:27:01.450: INFO: Got endpoints: latency-svc-vs84v [1.10201446s] May 14 13:27:01.486: INFO: Created: latency-svc-mlsxw May 14 13:27:01.504: INFO: Got endpoints: latency-svc-mlsxw [1.103143255s] May 14 13:27:01.527: INFO: Created: 
latency-svc-t2vjg May 14 13:27:01.540: INFO: Got endpoints: latency-svc-t2vjg [1.03315918s] May 14 13:27:01.587: INFO: Created: latency-svc-d8mzp May 14 13:27:01.596: INFO: Got endpoints: latency-svc-d8mzp [1.019813793s] May 14 13:27:01.635: INFO: Created: latency-svc-xswwf May 14 13:27:01.661: INFO: Got endpoints: latency-svc-xswwf [1.008601669s] May 14 13:27:01.749: INFO: Created: latency-svc-x8qmj May 14 13:27:01.752: INFO: Got endpoints: latency-svc-x8qmj [1.066618407s] May 14 13:27:01.797: INFO: Created: latency-svc-r5n6v May 14 13:27:01.812: INFO: Got endpoints: latency-svc-r5n6v [1.090616045s] May 14 13:27:01.935: INFO: Created: latency-svc-tfp46 May 14 13:27:01.938: INFO: Got endpoints: latency-svc-tfp46 [1.135472763s] May 14 13:27:01.983: INFO: Created: latency-svc-xzhpk May 14 13:27:01.999: INFO: Got endpoints: latency-svc-xzhpk [1.163108166s] May 14 13:27:02.025: INFO: Created: latency-svc-5pr6b May 14 13:27:02.090: INFO: Got endpoints: latency-svc-5pr6b [1.188168776s] May 14 13:27:02.093: INFO: Created: latency-svc-m2p5g May 14 13:27:02.107: INFO: Got endpoints: latency-svc-m2p5g [1.061355571s] May 14 13:27:02.187: INFO: Created: latency-svc-bg6w6 May 14 13:27:02.246: INFO: Got endpoints: latency-svc-bg6w6 [1.115387694s] May 14 13:27:02.271: INFO: Created: latency-svc-lvnn8 May 14 13:27:02.288: INFO: Got endpoints: latency-svc-lvnn8 [1.114837849s] May 14 13:27:02.325: INFO: Created: latency-svc-ltp96 May 14 13:27:02.336: INFO: Got endpoints: latency-svc-ltp96 [1.045385599s] May 14 13:27:02.378: INFO: Created: latency-svc-c4p8p May 14 13:27:02.384: INFO: Got endpoints: latency-svc-c4p8p [1.012906277s] May 14 13:27:02.463: INFO: Created: latency-svc-d6m5d May 14 13:27:02.539: INFO: Got endpoints: latency-svc-d6m5d [1.089066508s] May 14 13:27:02.541: INFO: Created: latency-svc-jpxrf May 14 13:27:02.553: INFO: Got endpoints: latency-svc-jpxrf [1.048940214s] May 14 13:27:02.582: INFO: Created: latency-svc-kq9d4 May 14 13:27:02.596: INFO: Got endpoints: latency-svc-kq9d4 [1.055134894s] May 14 13:27:02.618: INFO: Created: latency-svc-jtd4k May 14 13:27:02.632: INFO: Got endpoints: latency-svc-jtd4k [1.036351618s] May 14 13:27:02.683: INFO: Created: latency-svc-ntt87 May 14 13:27:02.708: INFO: Got endpoints: latency-svc-ntt87 [1.046891465s] May 14 13:27:02.739: INFO: Created: latency-svc-nm5ct May 14 13:27:02.747: INFO: Got endpoints: latency-svc-nm5ct [995.039742ms] May 14 13:27:02.878: INFO: Created: latency-svc-qfkk7 May 14 13:27:02.907: INFO: Got endpoints: latency-svc-qfkk7 [1.094699116s] May 14 13:27:02.908: INFO: Created: latency-svc-l62gd May 14 13:27:02.960: INFO: Got endpoints: latency-svc-l62gd [1.021816263s] May 14 13:27:03.085: INFO: Created: latency-svc-hrjkz May 14 13:27:03.138: INFO: Got endpoints: latency-svc-hrjkz [1.139263378s] May 14 13:27:03.246: INFO: Created: latency-svc-27l7g May 14 13:27:03.282: INFO: Got endpoints: latency-svc-27l7g [1.191437924s] May 14 13:27:03.339: INFO: Created: latency-svc-m4fg8 May 14 13:27:03.395: INFO: Got endpoints: latency-svc-m4fg8 [1.287858576s] May 14 13:27:03.411: INFO: Created: latency-svc-pcxz7 May 14 13:27:03.426: INFO: Got endpoints: latency-svc-pcxz7 [1.180279997s] May 14 13:27:03.447: INFO: Created: latency-svc-h5vk4 May 14 13:27:03.462: INFO: Got endpoints: latency-svc-h5vk4 [1.174540859s] May 14 13:27:03.489: INFO: Created: latency-svc-x7wh9 May 14 13:27:03.545: INFO: Got endpoints: latency-svc-x7wh9 [1.208895196s] May 14 13:27:03.560: INFO: Created: latency-svc-cdtbl May 14 13:27:03.578: INFO: Got endpoints: 
latency-svc-cdtbl [1.19327814s] May 14 13:27:03.616: INFO: Created: latency-svc-vkgdp May 14 13:27:03.632: INFO: Got endpoints: latency-svc-vkgdp [1.092721787s] May 14 13:27:03.709: INFO: Created: latency-svc-4l64c May 14 13:27:03.712: INFO: Got endpoints: latency-svc-4l64c [1.158868781s] May 14 13:27:03.765: INFO: Created: latency-svc-knspk May 14 13:27:03.782: INFO: Got endpoints: latency-svc-knspk [1.186692395s] May 14 13:27:03.857: INFO: Created: latency-svc-kfv7x May 14 13:27:03.858: INFO: Got endpoints: latency-svc-kfv7x [1.22641928s] May 14 13:27:03.896: INFO: Created: latency-svc-6spr5 May 14 13:27:03.915: INFO: Got endpoints: latency-svc-6spr5 [1.206522235s] May 14 13:27:03.994: INFO: Created: latency-svc-sbzpf May 14 13:27:04.006: INFO: Got endpoints: latency-svc-sbzpf [1.258889266s] May 14 13:27:04.034: INFO: Created: latency-svc-rkppr May 14 13:27:04.054: INFO: Got endpoints: latency-svc-rkppr [1.147055547s] May 14 13:27:04.132: INFO: Created: latency-svc-fftfp May 14 13:27:04.135: INFO: Got endpoints: latency-svc-fftfp [1.174566273s] May 14 13:27:04.179: INFO: Created: latency-svc-wmbdb May 14 13:27:04.222: INFO: Got endpoints: latency-svc-wmbdb [1.084065855s] May 14 13:27:04.277: INFO: Created: latency-svc-jtzb2 May 14 13:27:04.282: INFO: Got endpoints: latency-svc-jtzb2 [1.00049822s] May 14 13:27:04.328: INFO: Created: latency-svc-jrbwg May 14 13:27:04.349: INFO: Got endpoints: latency-svc-jrbwg [953.539531ms] May 14 13:27:04.370: INFO: Created: latency-svc-qvx2r May 14 13:27:04.427: INFO: Got endpoints: latency-svc-qvx2r [1.001399367s] May 14 13:27:04.460: INFO: Created: latency-svc-d7z7s May 14 13:27:04.476: INFO: Got endpoints: latency-svc-d7z7s [1.013085545s] May 14 13:27:04.508: INFO: Created: latency-svc-gn4jx May 14 13:27:04.557: INFO: Got endpoints: latency-svc-gn4jx [1.011962786s] May 14 13:27:04.580: INFO: Created: latency-svc-m9mdd May 14 13:27:04.600: INFO: Got endpoints: latency-svc-m9mdd [1.022421026s] May 14 13:27:04.630: INFO: Created: latency-svc-26hdh May 14 13:27:04.633: INFO: Got endpoints: latency-svc-26hdh [1.001195905s] May 14 13:27:04.695: INFO: Created: latency-svc-mpdcz May 14 13:27:04.706: INFO: Got endpoints: latency-svc-mpdcz [993.90882ms] May 14 13:27:04.731: INFO: Created: latency-svc-klxpl May 14 13:27:04.742: INFO: Got endpoints: latency-svc-klxpl [959.259484ms] May 14 13:27:04.768: INFO: Created: latency-svc-mkj62 May 14 13:27:04.784: INFO: Got endpoints: latency-svc-mkj62 [925.787195ms] May 14 13:27:04.845: INFO: Created: latency-svc-4wwc5 May 14 13:27:04.874: INFO: Got endpoints: latency-svc-4wwc5 [959.418839ms] May 14 13:27:04.875: INFO: Created: latency-svc-kv9zw May 14 13:27:04.894: INFO: Got endpoints: latency-svc-kv9zw [888.268996ms] May 14 13:27:04.928: INFO: Created: latency-svc-fbz69 May 14 13:27:04.942: INFO: Got endpoints: latency-svc-fbz69 [887.991197ms] May 14 13:27:05.035: INFO: Created: latency-svc-bwjzk May 14 13:27:05.055: INFO: Got endpoints: latency-svc-bwjzk [920.58818ms] May 14 13:27:05.153: INFO: Created: latency-svc-95pgj May 14 13:27:05.175: INFO: Got endpoints: latency-svc-95pgj [952.794617ms] May 14 13:27:05.198: INFO: Created: latency-svc-gf9d4 May 14 13:27:05.233: INFO: Got endpoints: latency-svc-gf9d4 [950.763949ms] May 14 13:27:05.312: INFO: Created: latency-svc-t2fxr May 14 13:27:05.342: INFO: Got endpoints: latency-svc-t2fxr [992.944532ms] May 14 13:27:05.380: INFO: Created: latency-svc-mctt9 May 14 13:27:05.387: INFO: Got endpoints: latency-svc-mctt9 [959.172519ms] May 14 13:27:05.462: INFO: Created: 
latency-svc-b52fc May 14 13:27:05.465: INFO: Got endpoints: latency-svc-b52fc [988.960494ms] May 14 13:27:05.503: INFO: Created: latency-svc-94gxm May 14 13:27:05.519: INFO: Got endpoints: latency-svc-94gxm [962.306483ms] May 14 13:27:05.545: INFO: Created: latency-svc-js68j May 14 13:27:05.586: INFO: Got endpoints: latency-svc-js68j [986.278632ms] May 14 13:27:05.643: INFO: Created: latency-svc-ngwwc May 14 13:27:05.652: INFO: Got endpoints: latency-svc-ngwwc [1.018639016s] May 14 13:27:05.678: INFO: Created: latency-svc-4r5wq May 14 13:27:05.724: INFO: Got endpoints: latency-svc-4r5wq [1.018306316s] May 14 13:27:05.737: INFO: Created: latency-svc-27wmh May 14 13:27:05.755: INFO: Got endpoints: latency-svc-27wmh [1.013001011s] May 14 13:27:05.780: INFO: Created: latency-svc-ff7nr May 14 13:27:05.803: INFO: Got endpoints: latency-svc-ff7nr [1.018563306s] May 14 13:27:05.872: INFO: Created: latency-svc-kdgsn May 14 13:27:05.894: INFO: Got endpoints: latency-svc-kdgsn [1.019499244s] May 14 13:27:05.912: INFO: Created: latency-svc-q2q6z May 14 13:27:05.930: INFO: Got endpoints: latency-svc-q2q6z [1.035836326s] May 14 13:27:06.000: INFO: Created: latency-svc-bzhnz May 14 13:27:06.003: INFO: Got endpoints: latency-svc-bzhnz [1.060692089s] May 14 13:27:06.049: INFO: Created: latency-svc-d4mnx May 14 13:27:06.081: INFO: Got endpoints: latency-svc-d4mnx [1.025867043s] May 14 13:27:06.144: INFO: Created: latency-svc-cght9 May 14 13:27:06.159: INFO: Got endpoints: latency-svc-cght9 [983.525088ms] May 14 13:27:06.231: INFO: Created: latency-svc-bp2s8 May 14 13:27:06.329: INFO: Got endpoints: latency-svc-bp2s8 [1.095968316s] May 14 13:27:06.382: INFO: Created: latency-svc-kbwks May 14 13:27:06.417: INFO: Got endpoints: latency-svc-kbwks [1.075356077s] May 14 13:27:06.479: INFO: Created: latency-svc-6vqdb May 14 13:27:06.514: INFO: Got endpoints: latency-svc-6vqdb [1.127222732s] May 14 13:27:06.578: INFO: Created: latency-svc-pc6cz May 14 13:27:06.700: INFO: Got endpoints: latency-svc-pc6cz [1.235566167s] May 14 13:27:06.709: INFO: Created: latency-svc-z8dfr May 14 13:27:06.772: INFO: Got endpoints: latency-svc-z8dfr [1.252871894s] May 14 13:27:06.890: INFO: Created: latency-svc-vc9bg May 14 13:27:06.904: INFO: Got endpoints: latency-svc-vc9bg [1.317772094s] May 14 13:27:06.932: INFO: Created: latency-svc-h2kkr May 14 13:27:06.994: INFO: Got endpoints: latency-svc-h2kkr [1.342151556s] May 14 13:27:06.997: INFO: Created: latency-svc-gnv5s May 14 13:27:07.013: INFO: Got endpoints: latency-svc-gnv5s [1.288394767s] May 14 13:27:07.070: INFO: Created: latency-svc-2p52d May 14 13:27:07.192: INFO: Got endpoints: latency-svc-2p52d [1.43720356s] May 14 13:27:07.208: INFO: Created: latency-svc-lgpx8 May 14 13:27:07.224: INFO: Got endpoints: latency-svc-lgpx8 [1.420982254s] May 14 13:27:07.280: INFO: Created: latency-svc-kh2xg May 14 13:27:07.341: INFO: Got endpoints: latency-svc-kh2xg [1.447014862s] May 14 13:27:07.376: INFO: Created: latency-svc-cs8dz May 14 13:27:07.398: INFO: Got endpoints: latency-svc-cs8dz [1.468388507s] May 14 13:27:07.492: INFO: Created: latency-svc-vwwhk May 14 13:27:07.520: INFO: Got endpoints: latency-svc-vwwhk [1.517622214s] May 14 13:27:07.522: INFO: Created: latency-svc-mmwwc May 14 13:27:07.569: INFO: Got endpoints: latency-svc-mmwwc [1.487658325s] May 14 13:27:07.569: INFO: Latencies: [66.073144ms 331.090134ms 386.67569ms 528.125675ms 660.646238ms 686.764943ms 717.491346ms 717.549312ms 721.156781ms 723.55174ms 731.889161ms 731.913082ms 740.041545ms 741.000893ms 744.446663ms 
747.555375ms 758.838177ms 759.218629ms 759.901611ms 761.975798ms 764.332216ms 765.869692ms 767.318021ms 771.399561ms 778.989818ms 787.223236ms 794.465075ms 796.227772ms 801.085322ms 802.056753ms 802.89781ms 807.753743ms 807.806087ms 808.276484ms 820.254663ms 830.60686ms 831.824089ms 839.425415ms 841.310342ms 841.725392ms 842.924464ms 843.513998ms 847.301826ms 848.99925ms 849.51883ms 855.34538ms 858.89361ms 862.263997ms 863.773049ms 865.551254ms 865.975796ms 880.188124ms 882.045757ms 882.56568ms 887.991197ms 888.268996ms 888.373419ms 890.80599ms 891.477428ms 895.112975ms 903.536356ms 909.66036ms 909.742624ms 911.676466ms 914.790368ms 916.807467ms 920.58818ms 921.575747ms 925.787195ms 937.758678ms 938.286686ms 938.650808ms 939.124668ms 943.522409ms 950.763949ms 952.794617ms 953.539531ms 954.024175ms 956.775399ms 956.880391ms 959.172519ms 959.259484ms 959.418839ms 962.306483ms 965.185958ms 974.6065ms 980.425631ms 983.525088ms 984.03742ms 986.278632ms 987.720413ms 987.952973ms 988.15555ms 988.960494ms 992.944532ms 993.90882ms 995.039742ms 998.845113ms 999.027325ms 1.00049822s 1.001195905s 1.001399367s 1.00239974s 1.008601669s 1.011962786s 1.012906277s 1.013001011s 1.013085545s 1.013793415s 1.014129253s 1.018306316s 1.018563306s 1.018639016s 1.019499244s 1.019813793s 1.021816263s 1.022421026s 1.025867043s 1.03315918s 1.035582505s 1.035836326s 1.036351618s 1.042459805s 1.045385599s 1.046891465s 1.048940214s 1.049822244s 1.051934361s 1.053296427s 1.055134894s 1.058890499s 1.060351499s 1.060692089s 1.061355571s 1.06243813s 1.065723739s 1.066618407s 1.06712417s 1.07394128s 1.075356077s 1.082779806s 1.084065855s 1.085900393s 1.089066508s 1.090616045s 1.092721787s 1.094699116s 1.095968316s 1.10201446s 1.103143255s 1.108604696s 1.114837849s 1.115387694s 1.12395648s 1.125345611s 1.127222732s 1.129882517s 1.135472763s 1.139263378s 1.147055547s 1.154342899s 1.155088338s 1.158868781s 1.163108166s 1.173454524s 1.174540859s 1.174566273s 1.179616286s 1.180279997s 1.186476839s 1.186692395s 1.188168776s 1.191437924s 1.19327814s 1.201034183s 1.206522235s 1.208895196s 1.22641928s 1.235566167s 1.252871894s 1.258889266s 1.264767669s 1.281568723s 1.287858576s 1.288394767s 1.317772094s 1.339257007s 1.342151556s 1.376880829s 1.391171071s 1.420982254s 1.435420366s 1.43720356s 1.447014862s 1.468388507s 1.487658325s 1.517622214s 1.574743334s 1.58498268s 1.634428977s] May 14 13:27:07.569: INFO: 50 %ile: 1.001195905s May 14 13:27:07.569: INFO: 90 %ile: 1.258889266s May 14 13:27:07.569: INFO: 99 %ile: 1.58498268s May 14 13:27:07.569: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:27:07.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3409" for this suite. 
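------------------------------
Note: the numbers above come from stamping out 200 short-lived Services against the single pod run by svc-latency-rc and timing, for each one, the delay between creating the Service and seeing its Endpoints object populated ("Created" versus "Got endpoints" in the log). Each generated Service is shaped roughly like this — the selector label is an assumption, chosen to match the RC's pod template; the generated names come from the log:

apiVersion: v1
kind: Service
metadata:
  generateName: latency-svc-        # the log shows names like latency-svc-9r444
spec:
  selector:
    name: svc-latency-rc            # assumed label on the pod created by svc-latency-rc
  ports:
  - port: 80
    protocol: TCP

The test then sorts the 200 samples and checks the 50th/90th/99th percentiles (here 1.00s / 1.26s / 1.58s) against the suite's acceptable endpoint-propagation thresholds.
------------------------------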
May 14 13:27:47.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:27:47.777: INFO: namespace svc-latency-3409 deletion completed in 40.190076603s • [SLOW TEST:58.523 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:27:47.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3d849e41-784c-4346-9b6f-40e5c05fa33e STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3d849e41-784c-4346-9b6f-40e5c05fa33e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:27:53.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4919" for this suite. 
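------------------------------
The "updates should be reflected in volume" test above patches a ConfigMap that is already mounted through a projected volume and then waits for the kubelet's sync loop to refresh the file inside the running pod, with no restart. A sketch of the update half, assuming the Python `kubernetes` client; the ConfigMap name mirrors the log, the key/value are illustrative:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Patch the mounted ConfigMap; the kubelet refreshes projected
    # volume contents periodically, so the file in the pod changes
    # in place (typically within a minute).
    v1.patch_namespaced_config_map(
        name="projected-configmap-test-upd-demo",
        namespace="default",
        body={"data": {"data-1": "value-2"}},
    )
------------------------------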
May 14 13:28:15.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:28:16.020: INFO: namespace projected-4919 deletion completed in 22.075682019s • [SLOW TEST:28.242 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:28:16.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 14 13:28:16.103: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8192,SelfLink:/api/v1/namespaces/watch-8192/configmaps/e2e-watch-test-watch-closed,UID:8f97b1e0-99af-4663-94d2-27bfc48a6aa0,ResourceVersion:10858340,Generation:0,CreationTimestamp:2020-05-14 13:28:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 13:28:16.103: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8192,SelfLink:/api/v1/namespaces/watch-8192/configmaps/e2e-watch-test-watch-closed,UID:8f97b1e0-99af-4663-94d2-27bfc48a6aa0,ResourceVersion:10858341,Generation:0,CreationTimestamp:2020-05-14 13:28:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 14 13:28:16.114: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8192,SelfLink:/api/v1/namespaces/watch-8192/configmaps/e2e-watch-test-watch-closed,UID:8f97b1e0-99af-4663-94d2-27bfc48a6aa0,ResourceVersion:10858342,Generation:0,CreationTimestamp:2020-05-14 13:28:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 13:28:16.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8192,SelfLink:/api/v1/namespaces/watch-8192/configmaps/e2e-watch-test-watch-closed,UID:8f97b1e0-99af-4663-94d2-27bfc48a6aa0,ResourceVersion:10858343,Generation:0,CreationTimestamp:2020-05-14 13:28:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:28:16.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8192" for this suite. May 14 13:28:22.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:28:22.246: INFO: namespace watch-8192 deletion completed in 6.107054095s • [SLOW TEST:6.227 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:28:22.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-217ff265-d712-4afe-8ff1-10da051def5a STEP: Creating a pod to test consume secrets May 14 13:28:22.348: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5" in namespace "projected-4963" to be "success or failure" May 14 13:28:22.370: INFO: Pod "pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.565735ms May 14 13:28:24.374: INFO: Pod "pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026305359s May 14 13:28:26.383: INFO: Pod "pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035424215s STEP: Saw pod success May 14 13:28:26.383: INFO: Pod "pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5" satisfied condition "success or failure" May 14 13:28:26.386: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5 container secret-volume-test: STEP: delete the pod May 14 13:28:26.459: INFO: Waiting for pod pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5 to disappear May 14 13:28:26.467: INFO: Pod pod-projected-secrets-50a2fd08-2ac5-430e-a920-f8050029b3f5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:28:26.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4963" for this suite. May 14 13:28:32.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:28:32.568: INFO: namespace projected-4963 deletion completed in 6.098128747s • [SLOW TEST:10.322 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:28:32.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 14 13:28:32.647: INFO: namespace kubectl-9044 May 14 13:28:32.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9044' May 14 13:28:32.970: INFO: stderr: "" May 14 13:28:32.970: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 14 13:28:33.975: INFO: Selector matched 1 pods for map[app:redis] May 14 13:28:33.975: INFO: Found 0 / 1 May 14 13:28:35.074: INFO: Selector matched 1 pods for map[app:redis] May 14 13:28:35.075: INFO: Found 0 / 1 May 14 13:28:35.975: INFO: Selector matched 1 pods for map[app:redis] May 14 13:28:35.975: INFO: Found 0 / 1 May 14 13:28:36.974: INFO: Selector matched 1 pods for map[app:redis] May 14 13:28:36.974: INFO: Found 1 / 1 May 14 13:28:36.974: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 May 14 13:28:36.976: INFO: Selector matched 1 pods for map[app:redis] May 14 13:28:36.976: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 13:28:36.976: INFO: wait on redis-master startup in kubectl-9044 May 14 13:28:36.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kp8qc redis-master --namespace=kubectl-9044' May 14 13:28:37.087: INFO: stderr: "" May 14 13:28:37.087: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 May 13:28:36.324 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 May 13:28:36.324 # Server started, Redis version 3.2.12\n1:M 14 May 13:28:36.324 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 May 13:28:36.324 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 14 13:28:37.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9044' May 14 13:28:37.313: INFO: stderr: "" May 14 13:28:37.313: INFO: stdout: "service/rm2 exposed\n" May 14 13:28:37.317: INFO: Service rm2 in namespace kubectl-9044 found. STEP: exposing service May 14 13:28:39.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9044' May 14 13:28:39.493: INFO: stderr: "" May 14 13:28:39.493: INFO: stdout: "service/rm3 exposed\n" May 14 13:28:39.497: INFO: Service rm3 in namespace kubectl-9044 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:28:41.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9044" for this suite. 
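------------------------------
The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` invocation above is shorthand for creating a Service whose selector matches the controller's pod labels. An equivalent API call, sketched with the Python `kubernetes` client (the app=redis label comes from the RC in this log; the namespace is illustrative):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Equivalent of `kubectl expose rc`: a Service selecting the
    # RC's pods by label (not by the RC's name).
    rm2 = client.V1Service(
        metadata=client.V1ObjectMeta(name="rm2"),
        spec=client.V1ServiceSpec(
            selector={"app": "redis"},
            ports=[client.V1ServicePort(port=1234, target_port=6379)],
        ),
    )
    v1.create_namespaced_service("default", rm2)

Exposing a service (rm3 above) works the same way; kubectl simply copies the selector from the existing service.
------------------------------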
May 14 13:29:03.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:29:03.629: INFO: namespace kubectl-9044 deletion completed in 22.12127456s • [SLOW TEST:31.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:29:03.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:29:38.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8706" for this suite. 
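------------------------------
The blackbox test above starts containers that exit and asserts on RestartCount, Phase, Ready, and State for each restart policy (rpa = Always, rpof = OnFailure, rpn = Never). A rough sketch of checking one case with the Python client; the fixed sleep is a crude stand-in for the framework's polling, and all names are illustrative:

    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # A container exiting non-zero under restartPolicy=Never should
    # end in Phase=Failed with State.Terminated and RestartCount=0.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="terminate-cmd-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="main",
                image="busybox",
                command=["sh", "-c", "exit 1"],
            )],
        ),
    )
    v1.create_namespaced_pod("default", pod)
    time.sleep(10)  # crude; the e2e framework polls with a timeout
    status = v1.read_namespaced_pod("terminate-cmd-demo", "default").status
    cs = status.container_statuses[0]
    print(status.phase, cs.restart_count, cs.state.terminated.reason)
------------------------------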
May 14 13:29:44.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:29:44.673: INFO: namespace container-runtime-8706 deletion completed in 6.10181903s • [SLOW TEST:41.044 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:29:44.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 14 13:29:44.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-233' May 14 13:29:44.967: INFO: stderr: "" May 14 13:29:44.967: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 14 13:29:45.970: INFO: Selector matched 1 pods for map[app:redis] May 14 13:29:45.971: INFO: Found 0 / 1 May 14 13:29:46.971: INFO: Selector matched 1 pods for map[app:redis] May 14 13:29:46.971: INFO: Found 0 / 1 May 14 13:29:47.972: INFO: Selector matched 1 pods for map[app:redis] May 14 13:29:47.972: INFO: Found 0 / 1 May 14 13:29:48.971: INFO: Selector matched 1 pods for map[app:redis] May 14 13:29:48.971: INFO: Found 1 / 1 May 14 13:29:48.971: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 13:29:48.974: INFO: Selector matched 1 pods for map[app:redis] May 14 13:29:48.974: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 14 13:29:48.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dt4kw redis-master --namespace=kubectl-233' May 14 13:29:49.100: INFO: stderr: "" May 14 13:29:49.100: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 May 13:29:48.174 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 May 13:29:48.174 # Server started, Redis version 3.2.12\n1:M 14 May 13:29:48.174 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 May 13:29:48.174 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 14 13:29:49.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dt4kw redis-master --namespace=kubectl-233 --tail=1' May 14 13:29:49.198: INFO: stderr: "" May 14 13:29:49.198: INFO: stdout: "1:M 14 May 13:29:48.174 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 14 13:29:49.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dt4kw redis-master --namespace=kubectl-233 --limit-bytes=1' May 14 13:29:49.308: INFO: stderr: "" May 14 13:29:49.308: INFO: stdout: " " STEP: exposing timestamps May 14 13:29:49.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dt4kw redis-master --namespace=kubectl-233 --tail=1 --timestamps' May 14 13:29:49.424: INFO: stderr: "" May 14 13:29:49.424: INFO: stdout: "2020-05-14T13:29:48.174940763Z 1:M 14 May 13:29:48.174 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 14 13:29:51.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dt4kw redis-master --namespace=kubectl-233 --since=1s' May 14 13:29:52.038: INFO: stderr: "" May 14 13:29:52.038: INFO: stdout: "" May 14 13:29:52.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dt4kw redis-master --namespace=kubectl-233 --since=24h' May 14 13:29:52.159: INFO: stderr: "" May 14 13:29:52.159: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 May 13:29:48.174 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 May 13:29:48.174 # Server started, Redis version 3.2.12\n1:M 14 May 13:29:48.174 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 May 13:29:48.174 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 14 13:29:52.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-233' May 14 13:29:52.263: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 13:29:52.263: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 14 13:29:52.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-233' May 14 13:29:52.363: INFO: stderr: "No resources found.\n" May 14 13:29:52.363: INFO: stdout: "" May 14 13:29:52.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-233 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 13:29:52.463: INFO: stderr: "" May 14 13:29:52.463: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:29:52.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-233" for this suite. 
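------------------------------
The `--tail`, `--limit-bytes`, `--timestamps`, and `--since` flags exercised above map directly onto pod-log query parameters. The same filters through the Python `kubernetes` client, assuming the pod from this log were still running (names copied from the log for illustration):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    name, ns = "redis-master-dt4kw", "kubectl-233"
    # --tail=1
    print(v1.read_namespaced_pod_log(name, ns, tail_lines=1))
    # --limit-bytes=1
    print(v1.read_namespaced_pod_log(name, ns, limit_bytes=1))
    # --tail=1 --timestamps
    print(v1.read_namespaced_pod_log(name, ns, tail_lines=1, timestamps=True))
    # --since=1s
    print(v1.read_namespaced_pod_log(name, ns, since_seconds=1))
------------------------------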
May 14 13:30:14.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:30:14.589: INFO: namespace kubectl-233 deletion completed in 22.09360643s • [SLOW TEST:29.915 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:30:14.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-b984446a-5b6f-4b14-9195-ce0a19ea4606 STEP: Creating a pod to test consume secrets May 14 13:30:14.694: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-037dde04-280c-4039-9911-87d682585698" in namespace "projected-6486" to be "success or failure" May 14 13:30:14.700: INFO: Pod "pod-projected-secrets-037dde04-280c-4039-9911-87d682585698": Phase="Pending", Reason="", readiness=false. Elapsed: 5.494128ms May 14 13:30:16.704: INFO: Pod "pod-projected-secrets-037dde04-280c-4039-9911-87d682585698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010147455s May 14 13:30:18.708: INFO: Pod "pod-projected-secrets-037dde04-280c-4039-9911-87d682585698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013941566s STEP: Saw pod success May 14 13:30:18.708: INFO: Pod "pod-projected-secrets-037dde04-280c-4039-9911-87d682585698" satisfied condition "success or failure" May 14 13:30:18.711: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-037dde04-280c-4039-9911-87d682585698 container projected-secret-volume-test: STEP: delete the pod May 14 13:30:18.738: INFO: Waiting for pod pod-projected-secrets-037dde04-280c-4039-9911-87d682585698 to disappear May 14 13:30:18.754: INFO: Pod pod-projected-secrets-037dde04-280c-4039-9911-87d682585698 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:30:18.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6486" for this suite. 
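------------------------------
"Item Mode set" in the test above means each projected secret key is mapped to a file with an explicit mode. A sketch of such a volume definition with the Python client (secret name, key, and paths are illustrative):

    from kubernetes import client

    # Projected secret volume where the mapped file gets mode 0400.
    volume = client.V1Volume(
        name="projected-secret-volume",
        projected=client.V1ProjectedVolumeSource(sources=[
            client.V1VolumeProjection(secret=client.V1SecretProjection(
                name="projected-secret-test-map-demo",
                items=[client.V1KeyToPath(
                    key="data-1",
                    path="new-path-data-1",
                    mode=0o400,   # serialized as decimal 256 in the API
                )],
            )),
        ]),
    )
    mount = client.V1VolumeMount(
        name="projected-secret-volume",
        mount_path="/etc/projected-secret-volume",
        read_only=True,
    )
------------------------------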
May 14 13:30:24.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:30:24.854: INFO: namespace projected-6486 deletion completed in 6.095351962s • [SLOW TEST:10.264 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:30:24.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:30:28.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5339" for this suite. 
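------------------------------
For the "busybox command that always fails" kubelet test above, the assertion is that the container status eventually carries a terminated state with a populated reason. Reading that via the Python client, with an illustrative pod name; note that while a crash-looping container is in restart backoff, the current state is Waiting and the most recent exit lives in last_state:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod("bin-false-demo", "default")
    cs = pod.status.container_statuses[0]
    # During backoff, state.terminated is None and the previous run's
    # exit details are kept under last_state.terminated.
    term = cs.state.terminated or cs.last_state.terminated
    print(term.reason, term.exit_code)
------------------------------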
May 14 13:30:34.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:30:35.049: INFO: namespace kubelet-test-5339 deletion completed in 6.077409293s • [SLOW TEST:10.194 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:30:35.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:30:35.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f" in namespace "downward-api-3429" to be "success or failure" May 14 13:30:35.153: INFO: Pod "downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.675473ms May 14 13:30:37.157: INFO: Pod "downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033473704s May 14 13:30:39.161: INFO: Pod "downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f": Phase="Running", Reason="", readiness=true. Elapsed: 4.037152962s May 14 13:30:41.166: INFO: Pod "downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041770472s STEP: Saw pod success May 14 13:30:41.166: INFO: Pod "downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f" satisfied condition "success or failure" May 14 13:30:41.169: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f container client-container: STEP: delete the pod May 14 13:30:41.204: INFO: Waiting for pod downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f to disappear May 14 13:30:41.212: INFO: Pod downwardapi-volume-8207b9e5-ddae-4bea-9c2f-549ec07e211f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:30:41.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3429" for this suite. 
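------------------------------
The downward API volume test above exposes the container's own memory limit as a file via a resourceFieldRef. A sketch of the corresponding pod pieces with the Python client (names, image, and divisor are illustrative):

    from kubernetes import client

    container = client.V1Container(
        name="client-container",
        image="busybox",
        command=["sh", "-c", "cat /etc/podinfo/mem_limit"],
        resources=client.V1ResourceRequirements(limits={"memory": "64Mi"}),
        volume_mounts=[client.V1VolumeMount(
            name="podinfo", mount_path="/etc/podinfo")],
    )
    volume = client.V1Volume(
        name="podinfo",
        downward_api=client.V1DownwardAPIVolumeSource(items=[
            client.V1DownwardAPIVolumeFile(
                path="mem_limit",
                resource_field_ref=client.V1ResourceFieldSelector(
                    container_name="client-container",
                    resource="limits.memory",
                    divisor="1Mi",   # the file then reads "64"
                ),
            ),
        ]),
    )
------------------------------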
May 14 13:30:47.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:30:47.307: INFO: namespace downward-api-3429 deletion completed in 6.092037703s • [SLOW TEST:12.259 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:30:47.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 14 13:30:47.655: INFO: Waiting up to 5m0s for pod "downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19" in namespace "downward-api-5286" to be "success or failure" May 14 13:30:47.688: INFO: Pod "downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19": Phase="Pending", Reason="", readiness=false. Elapsed: 32.979247ms May 14 13:30:49.691: INFO: Pod "downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036262194s May 14 13:30:51.695: INFO: Pod "downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040157104s May 14 13:30:53.699: INFO: Pod "downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044872138s STEP: Saw pod success May 14 13:30:53.700: INFO: Pod "downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19" satisfied condition "success or failure" May 14 13:30:53.703: INFO: Trying to get logs from node iruya-worker2 pod downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19 container dapi-container: STEP: delete the pod May 14 13:30:53.731: INFO: Waiting for pod downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19 to disappear May 14 13:30:53.749: INFO: Pod downward-api-41f278db-eb98-4833-9b15-fd7cfbaa5e19 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:30:53.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5286" for this suite. 
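------------------------------
The env-var variant above injects the same resource fields through valueFrom instead of a volume. A sketch with the Python client; the helper, names, and resource values are illustrative:

    from kubernetes import client

    def resource_env(name, resource):
        # Expose one of this container's resource fields as an env var.
        return client.V1EnvVar(
            name=name,
            value_from=client.V1EnvVarSource(
                resource_field_ref=client.V1ResourceFieldSelector(
                    container_name="dapi-container", resource=resource)))

    container = client.V1Container(
        name="dapi-container",
        image="busybox",
        command=["sh", "-c", "env"],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "32Mi"},
            limits={"cpu": "1250m", "memory": "64Mi"},
        ),
        env=[
            resource_env("CPU_LIMIT", "limits.cpu"),
            resource_env("MEMORY_LIMIT", "limits.memory"),
            resource_env("CPU_REQUEST", "requests.cpu"),
            resource_env("MEMORY_REQUEST", "requests.memory"),
        ],
    )
------------------------------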
May 14 13:30:59.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:31:00.102: INFO: namespace downward-api-5286 deletion completed in 6.348741076s • [SLOW TEST:12.794 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:31:00.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:31:00.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4" in namespace "projected-2105" to be "success or failure" May 14 13:31:00.253: INFO: Pod "downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.214931ms May 14 13:31:02.258: INFO: Pod "downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045829087s May 14 13:31:04.263: INFO: Pod "downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050609534s STEP: Saw pod success May 14 13:31:04.263: INFO: Pod "downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4" satisfied condition "success or failure" May 14 13:31:04.266: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4 container client-container: STEP: delete the pod May 14 13:31:04.304: INFO: Waiting for pod downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4 to disappear May 14 13:31:04.324: INFO: Pod downwardapi-volume-91a632a9-a695-4bd1-9426-aa37e4236de4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:31:04.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2105" for this suite. 
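------------------------------
A projected downwardAPI volume, as in the "should provide podname only" test above, differs from a plain downwardAPI volume only in nesting the items under a projected source; here the pod's own metadata.name becomes a file. Sketch (volume and path names illustrative):

    from kubernetes import client

    volume = client.V1Volume(
        name="podinfo",
        projected=client.V1ProjectedVolumeSource(sources=[
            client.V1VolumeProjection(
                downward_api=client.V1DownwardAPIProjection(items=[
                    client.V1DownwardAPIVolumeFile(
                        path="podname",
                        field_ref=client.V1ObjectFieldSelector(
                            field_path="metadata.name"),
                    ),
                ]),
            ),
        ]),
    )
------------------------------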
May 14 13:31:10.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:31:10.404: INFO: namespace projected-2105 deletion completed in 6.076258711s • [SLOW TEST:10.302 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:31:10.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 14 13:31:15.556: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:31:16.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2077" for this suite. 
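------------------------------
Adoption and release in the ReplicaSet test above are purely label-driven: a bare pod matching the ReplicaSet's selector gets an ownerReference added (adoption), and relabeling it out of the selector makes the controller orphan it again and start a replacement. A sketch of the release step with the Python client (pod name mirrors the log; namespace and label value are illustrative):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Flip the pod's label out of the ReplicaSet's selector; the
    # controller then removes its ownerReference ("release") and
    # creates a replacement pod to maintain the replica count.
    v1.patch_namespaced_pod(
        "pod-adoption-release", "default",
        {"metadata": {"labels": {"name": "not-pod-adoption-release"}}},
    )
    pod = v1.read_namespaced_pod("pod-adoption-release", "default")
    print(pod.metadata.owner_references)   # expect no ReplicaSet owner now
------------------------------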
May 14 13:31:38.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:31:38.712: INFO: namespace replicaset-2077 deletion completed in 22.131163637s • [SLOW TEST:28.307 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:31:38.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:31:38.882: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 14 13:31:43.886: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 13:31:43.886: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 14 13:31:47.952: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8781,SelfLink:/apis/apps/v1/namespaces/deployment-8781/deployments/test-cleanup-deployment,UID:aa030538-ea5e-4cb3-a7a9-95380213434e,ResourceVersion:10859140,Generation:1,CreationTimestamp:2020-05-14 13:31:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-14 13:31:43 +0000 UTC 2020-05-14 13:31:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-14 13:31:47 +0000 UTC 2020-05-14 13:31:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 14 13:31:47.954: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8781,SelfLink:/apis/apps/v1/namespaces/deployment-8781/replicasets/test-cleanup-deployment-55bbcbc84c,UID:d2718f71-f348-466d-8366-d67c2c1ee56f,ResourceVersion:10859129,Generation:1,CreationTimestamp:2020-05-14 13:31:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment aa030538-ea5e-4cb3-a7a9-95380213434e 0xc00290cbc7 0xc00290cbc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 14 13:31:47.957: INFO: Pod "test-cleanup-deployment-55bbcbc84c-4x8mw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-4x8mw,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8781,SelfLink:/api/v1/namespaces/deployment-8781/pods/test-cleanup-deployment-55bbcbc84c-4x8mw,UID:b0b81564-c0a0-4f8b-848c-68bfb2140edf,ResourceVersion:10859128,Generation:0,CreationTimestamp:2020-05-14 13:31:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c d2718f71-f348-466d-8366-d67c2c1ee56f 0xc002b76d37 0xc002b76d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n8hvg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n8hvg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-n8hvg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b76dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b76de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:31:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:31:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-05-14 13:31:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:31:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.68,StartTime:2020-05-14 13:31:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-14 13:31:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e43ab7a177974710d75c540ddd0c95e69705ae713905ec728dd83402982ba0a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:31:47.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8781" for this suite. May 14 13:31:54.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:31:54.186: INFO: namespace deployment-8781 deletion completed in 6.227037988s • [SLOW TEST:15.474 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:31:54.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:31:54.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f" in namespace "projected-9389" to be "success or failure" May 14 13:31:54.288: INFO: Pod "downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.12136ms May 14 13:31:56.292: INFO: Pod "downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011027679s May 14 13:31:58.295: INFO: Pod "downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014524657s STEP: Saw pod success May 14 13:31:58.295: INFO: Pod "downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f" satisfied condition "success or failure" May 14 13:31:58.298: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f container client-container: STEP: delete the pod May 14 13:31:58.486: INFO: Waiting for pod downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f to disappear May 14 13:31:58.513: INFO: Pod downwardapi-volume-a2dcd167-2143-47b7-a08a-a54dc55f583f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:31:58.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9389" for this suite. May 14 13:32:04.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:32:04.664: INFO: namespace projected-9389 deletion completed in 6.147234968s • [SLOW TEST:10.478 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:32:04.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 13:32:04.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8818' May 14 13:32:04.824: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 13:32:04.824: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 14 13:32:04.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8818' May 14 13:32:04.952: INFO: stderr: "" May 14 13:32:04.952: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:32:04.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8818" for this suite. May 14 13:32:10.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:32:11.178: INFO: namespace kubectl-8818 deletion completed in 6.222869215s • [SLOW TEST:6.513 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:32:11.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-pjhm STEP: Creating a pod to test atomic-volume-subpath May 14 13:32:11.534: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pjhm" in namespace "subpath-9733" to be "success or failure" May 14 13:32:11.547: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.804411ms May 14 13:32:13.550: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01571169s May 14 13:32:15.555: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 4.02015168s May 14 13:32:17.558: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 6.02354508s May 14 13:32:19.563: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.02809988s May 14 13:32:21.567: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 10.032224902s May 14 13:32:23.570: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 12.035825736s May 14 13:32:25.574: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 14.039196838s May 14 13:32:27.578: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 16.043404864s May 14 13:32:29.583: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 18.04815892s May 14 13:32:31.587: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 20.052058983s May 14 13:32:33.591: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Running", Reason="", readiness=true. Elapsed: 22.056498875s May 14 13:32:35.595: INFO: Pod "pod-subpath-test-configmap-pjhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060733349s STEP: Saw pod success May 14 13:32:35.595: INFO: Pod "pod-subpath-test-configmap-pjhm" satisfied condition "success or failure" May 14 13:32:35.599: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-pjhm container test-container-subpath-configmap-pjhm: STEP: delete the pod May 14 13:32:35.664: INFO: Waiting for pod pod-subpath-test-configmap-pjhm to disappear May 14 13:32:35.747: INFO: Pod pod-subpath-test-configmap-pjhm no longer exists STEP: Deleting pod pod-subpath-test-configmap-pjhm May 14 13:32:35.747: INFO: Deleting pod "pod-subpath-test-configmap-pjhm" in namespace "subpath-9733" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:32:35.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9733" for this suite. 
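Note: the subpath test above mounts a single configMap key over a file that already exists in the container image. A minimal sketch of the same pattern (resource names are assumptions, not the harness-generated spec; nginx:1.14-alpine is the image the suite uses elsewhere):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-demo
  data:
    index.html: "hello from the configMap"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: web
      image: docker.io/library/nginx:1.14-alpine
      volumeMounts:
      - name: cfg
        mountPath: /usr/share/nginx/html/index.html   # an existing file, replaced in place
        subPath: index.html                           # mount one key, not the whole directory
    volumes:
    - name: cfg
      configMap:
        name: subpath-demo
  EOF

One caveat worth knowing: subPath mounts bypass the atomic-writer symlink swap, so unlike whole-volume configMap mounts they do not pick up later updates to the configMap.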
May 14 13:32:41.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:32:41.842: INFO: namespace subpath-9733 deletion completed in 6.087416034s • [SLOW TEST:30.664 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:32:41.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:32:41.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 14 13:32:42.056: INFO: stderr: "" May 14 13:32:42.056: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:32:42.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2454" for this suite. 
May 14 13:32:48.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:32:48.139: INFO: namespace kubectl-2454 deletion completed in 6.07882032s • [SLOW TEST:6.296 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:32:48.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:33:14.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1977" for this suite. May 14 13:33:20.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:33:20.485: INFO: namespace namespaces-1977 deletion completed in 6.096508564s STEP: Destroying namespace "nsdeletetest-4577" for this suite. May 14 13:33:20.487: INFO: Namespace nsdeletetest-4577 was already deleted STEP: Destroying namespace "nsdeletetest-3619" for this suite. 
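Note: the namespace test verifies that deleting a namespace garbage-collects the pods inside it before the namespace object itself disappears. The same check by hand (namespace and pod names are assumptions):

  kubectl create namespace nsdelete-demo
  kubectl run nginx --image=docker.io/library/nginx:1.14-alpine --restart=Never --namespace=nsdelete-demo
  kubectl delete namespace nsdelete-demo      # waits while the namespace sits in Terminating
  kubectl get pods --namespace=nsdelete-demo  # NotFound: the pods were reaped with the namespace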
May 14 13:33:26.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:33:26.563: INFO: namespace nsdeletetest-3619 deletion completed in 6.076161866s • [SLOW TEST:38.424 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:33:26.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 13:33:26.936: INFO: Waiting up to 5m0s for pod "pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5" in namespace "emptydir-7379" to be "success or failure" May 14 13:33:26.981: INFO: Pod "pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.45979ms May 14 13:33:29.096: INFO: Pod "pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15958688s May 14 13:33:31.114: INFO: Pod "pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177652169s STEP: Saw pod success May 14 13:33:31.114: INFO: Pod "pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5" satisfied condition "success or failure" May 14 13:33:31.116: INFO: Trying to get logs from node iruya-worker2 pod pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5 container test-container: STEP: delete the pod May 14 13:33:31.137: INFO: Waiting for pod pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5 to disappear May 14 13:33:31.176: INFO: Pod pod-6db65ab0-a3a7-4f94-8f6f-36735aaff2b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:33:31.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7379" for this suite. 
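Note: the (root,0666,default) emptyDir case writes a file as root with mode 0666 on the default (node-disk) medium and reads the mode back. A hand-rolled equivalent under those assumptions (the harness uses its own mounttest image; busybox stands in for it here):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "touch /ed/f && chmod 0666 /ed/f && stat -c '%a' /ed/f"]
      volumeMounts:
      - name: ed
        mountPath: /ed
    volumes:
    - name: ed
      emptyDir: {}    # default medium = node disk; medium: Memory would use tmpfs instead
  EOF
  kubectl logs emptydir-mode-demo   # expect: 666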
May 14 13:33:37.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:33:37.502: INFO: namespace emptydir-7379 deletion completed in 6.322849421s • [SLOW TEST:10.938 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:33:37.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3968 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 14 13:33:37.623: INFO: Found 0 stateful pods, waiting for 3 May 14 13:33:47.635: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 13:33:47.635: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 13:33:47.635: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 14 13:33:57.626: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 13:33:57.626: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 13:33:57.626: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 14 13:33:57.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3968 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 13:33:57.877: INFO: stderr: "I0514 13:33:57.765004 1437 log.go:172] (0xc0009a0420) (0xc000a06820) Create stream\nI0514 13:33:57.765067 1437 log.go:172] (0xc0009a0420) (0xc000a06820) Stream added, broadcasting: 1\nI0514 13:33:57.767446 1437 log.go:172] (0xc0009a0420) Reply frame received for 1\nI0514 13:33:57.767497 1437 log.go:172] (0xc0009a0420) (0xc0006fe140) Create stream\nI0514 13:33:57.767521 1437 log.go:172] (0xc0009a0420) (0xc0006fe140) Stream added, broadcasting: 3\nI0514 13:33:57.768521 1437 log.go:172] (0xc0009a0420) Reply frame received for 3\nI0514 13:33:57.768558 1437 log.go:172] (0xc0009a0420) (0xc000a068c0) Create stream\nI0514 13:33:57.768571 1437 log.go:172] (0xc0009a0420) (0xc000a068c0) Stream added, broadcasting: 5\nI0514 13:33:57.769822 1437 log.go:172] (0xc0009a0420) 
Reply frame received for 5\nI0514 13:33:57.835944 1437 log.go:172] (0xc0009a0420) Data frame received for 5\nI0514 13:33:57.835969 1437 log.go:172] (0xc000a068c0) (5) Data frame handling\nI0514 13:33:57.835981 1437 log.go:172] (0xc000a068c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 13:33:57.868774 1437 log.go:172] (0xc0009a0420) Data frame received for 5\nI0514 13:33:57.868833 1437 log.go:172] (0xc000a068c0) (5) Data frame handling\nI0514 13:33:57.868862 1437 log.go:172] (0xc0009a0420) Data frame received for 3\nI0514 13:33:57.868954 1437 log.go:172] (0xc0006fe140) (3) Data frame handling\nI0514 13:33:57.868989 1437 log.go:172] (0xc0006fe140) (3) Data frame sent\nI0514 13:33:57.869005 1437 log.go:172] (0xc0009a0420) Data frame received for 3\nI0514 13:33:57.869037 1437 log.go:172] (0xc0006fe140) (3) Data frame handling\nI0514 13:33:57.871676 1437 log.go:172] (0xc0009a0420) Data frame received for 1\nI0514 13:33:57.871700 1437 log.go:172] (0xc000a06820) (1) Data frame handling\nI0514 13:33:57.871720 1437 log.go:172] (0xc000a06820) (1) Data frame sent\nI0514 13:33:57.871764 1437 log.go:172] (0xc0009a0420) (0xc000a06820) Stream removed, broadcasting: 1\nI0514 13:33:57.871792 1437 log.go:172] (0xc0009a0420) Go away received\nI0514 13:33:57.872360 1437 log.go:172] (0xc0009a0420) (0xc000a06820) Stream removed, broadcasting: 1\nI0514 13:33:57.872399 1437 log.go:172] (0xc0009a0420) (0xc0006fe140) Stream removed, broadcasting: 3\nI0514 13:33:57.872418 1437 log.go:172] (0xc0009a0420) (0xc000a068c0) Stream removed, broadcasting: 5\n" May 14 13:33:57.877: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 13:33:57.877: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 14 13:34:07.919: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 14 13:34:17.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3968 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 13:34:21.028: INFO: stderr: "I0514 13:34:20.935256 1457 log.go:172] (0xc0008560b0) (0xc0005bc820) Create stream\nI0514 13:34:20.935298 1457 log.go:172] (0xc0008560b0) (0xc0005bc820) Stream added, broadcasting: 1\nI0514 13:34:20.937756 1457 log.go:172] (0xc0008560b0) Reply frame received for 1\nI0514 13:34:20.937817 1457 log.go:172] (0xc0008560b0) (0xc0005bc8c0) Create stream\nI0514 13:34:20.937838 1457 log.go:172] (0xc0008560b0) (0xc0005bc8c0) Stream added, broadcasting: 3\nI0514 13:34:20.938879 1457 log.go:172] (0xc0008560b0) Reply frame received for 3\nI0514 13:34:20.938911 1457 log.go:172] (0xc0008560b0) (0xc000282000) Create stream\nI0514 13:34:20.938918 1457 log.go:172] (0xc0008560b0) (0xc000282000) Stream added, broadcasting: 5\nI0514 13:34:20.939722 1457 log.go:172] (0xc0008560b0) Reply frame received for 5\nI0514 13:34:21.021802 1457 log.go:172] (0xc0008560b0) Data frame received for 5\nI0514 13:34:21.021854 1457 log.go:172] (0xc000282000) (5) Data frame handling\nI0514 13:34:21.021872 1457 log.go:172] (0xc000282000) (5) Data frame sent\nI0514 13:34:21.021883 1457 log.go:172] (0xc0008560b0) Data frame received for 5\nI0514 13:34:21.021892 1457 log.go:172] (0xc000282000) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/share/nginx/html/\nI0514 13:34:21.021919 1457 log.go:172] (0xc0008560b0) Data frame received for 3\nI0514 13:34:21.021929 1457 log.go:172] (0xc0005bc8c0) (3) Data frame handling\nI0514 13:34:21.021946 1457 log.go:172] (0xc0005bc8c0) (3) Data frame sent\nI0514 13:34:21.021955 1457 log.go:172] (0xc0008560b0) Data frame received for 3\nI0514 13:34:21.021964 1457 log.go:172] (0xc0005bc8c0) (3) Data frame handling\nI0514 13:34:21.023381 1457 log.go:172] (0xc0008560b0) Data frame received for 1\nI0514 13:34:21.023412 1457 log.go:172] (0xc0005bc820) (1) Data frame handling\nI0514 13:34:21.023437 1457 log.go:172] (0xc0005bc820) (1) Data frame sent\nI0514 13:34:21.023457 1457 log.go:172] (0xc0008560b0) (0xc0005bc820) Stream removed, broadcasting: 1\nI0514 13:34:21.023483 1457 log.go:172] (0xc0008560b0) Go away received\nI0514 13:34:21.023995 1457 log.go:172] (0xc0008560b0) (0xc0005bc820) Stream removed, broadcasting: 1\nI0514 13:34:21.024017 1457 log.go:172] (0xc0008560b0) (0xc0005bc8c0) Stream removed, broadcasting: 3\nI0514 13:34:21.024026 1457 log.go:172] (0xc0008560b0) (0xc000282000) Stream removed, broadcasting: 5\n" May 14 13:34:21.028: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 13:34:21.028: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 13:34:51.129: INFO: Waiting for StatefulSet statefulset-3968/ss2 to complete update STEP: Rolling back to a previous revision May 14 13:35:01.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3968 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 13:35:01.424: INFO: stderr: "I0514 13:35:01.272105 1489 log.go:172] (0xc000118630) (0xc0007a0b40) Create stream\nI0514 13:35:01.272184 1489 log.go:172] (0xc000118630) (0xc0007a0b40) Stream added, broadcasting: 1\nI0514 13:35:01.274563 1489 log.go:172] (0xc000118630) Reply frame received for 1\nI0514 13:35:01.274596 1489 log.go:172] (0xc000118630) (0xc0007ece60) Create stream\nI0514 13:35:01.274607 1489 log.go:172] (0xc000118630) (0xc0007ece60) Stream added, broadcasting: 3\nI0514 13:35:01.275594 1489 log.go:172] (0xc000118630) Reply frame received for 3\nI0514 13:35:01.275645 1489 log.go:172] (0xc000118630) (0xc0007a0be0) Create stream\nI0514 13:35:01.275661 1489 log.go:172] (0xc000118630) (0xc0007a0be0) Stream added, broadcasting: 5\nI0514 13:35:01.276405 1489 log.go:172] (0xc000118630) Reply frame received for 5\nI0514 13:35:01.355517 1489 log.go:172] (0xc000118630) Data frame received for 5\nI0514 13:35:01.355536 1489 log.go:172] (0xc0007a0be0) (5) Data frame handling\nI0514 13:35:01.355543 1489 log.go:172] (0xc0007a0be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 13:35:01.416326 1489 log.go:172] (0xc000118630) Data frame received for 3\nI0514 13:35:01.416367 1489 log.go:172] (0xc0007ece60) (3) Data frame handling\nI0514 13:35:01.416393 1489 log.go:172] (0xc0007ece60) (3) Data frame sent\nI0514 13:35:01.416469 1489 log.go:172] (0xc000118630) Data frame received for 3\nI0514 13:35:01.416501 1489 log.go:172] (0xc0007ece60) (3) Data frame handling\nI0514 13:35:01.416633 1489 log.go:172] (0xc000118630) Data frame received for 5\nI0514 13:35:01.416656 1489 log.go:172] (0xc0007a0be0) (5) Data frame handling\nI0514 13:35:01.418673 1489 log.go:172] (0xc000118630) Data frame received for 1\nI0514 13:35:01.418718 1489 log.go:172] (0xc0007a0b40) (1) Data frame 
handling\nI0514 13:35:01.418730 1489 log.go:172] (0xc0007a0b40) (1) Data frame sent\nI0514 13:35:01.418744 1489 log.go:172] (0xc000118630) (0xc0007a0b40) Stream removed, broadcasting: 1\nI0514 13:35:01.418759 1489 log.go:172] (0xc000118630) Go away received\nI0514 13:35:01.419246 1489 log.go:172] (0xc000118630) (0xc0007a0b40) Stream removed, broadcasting: 1\nI0514 13:35:01.419272 1489 log.go:172] (0xc000118630) (0xc0007ece60) Stream removed, broadcasting: 3\nI0514 13:35:01.419284 1489 log.go:172] (0xc000118630) (0xc0007a0be0) Stream removed, broadcasting: 5\n" May 14 13:35:01.424: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 13:35:01.424: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 13:35:11.458: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 14 13:35:21.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3968 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 13:35:21.701: INFO: stderr: "I0514 13:35:21.610685 1509 log.go:172] (0xc000652420) (0xc0006ae640) Create stream\nI0514 13:35:21.610734 1509 log.go:172] (0xc000652420) (0xc0006ae640) Stream added, broadcasting: 1\nI0514 13:35:21.613353 1509 log.go:172] (0xc000652420) Reply frame received for 1\nI0514 13:35:21.613397 1509 log.go:172] (0xc000652420) (0xc0007f0000) Create stream\nI0514 13:35:21.613423 1509 log.go:172] (0xc000652420) (0xc0007f0000) Stream added, broadcasting: 3\nI0514 13:35:21.614581 1509 log.go:172] (0xc000652420) Reply frame received for 3\nI0514 13:35:21.614622 1509 log.go:172] (0xc000652420) (0xc0006ae6e0) Create stream\nI0514 13:35:21.614638 1509 log.go:172] (0xc000652420) (0xc0006ae6e0) Stream added, broadcasting: 5\nI0514 13:35:21.615654 1509 log.go:172] (0xc000652420) Reply frame received for 5\nI0514 13:35:21.694365 1509 log.go:172] (0xc000652420) Data frame received for 5\nI0514 13:35:21.694400 1509 log.go:172] (0xc0006ae6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0514 13:35:21.694430 1509 log.go:172] (0xc000652420) Data frame received for 3\nI0514 13:35:21.694476 1509 log.go:172] (0xc0007f0000) (3) Data frame handling\nI0514 13:35:21.694496 1509 log.go:172] (0xc0007f0000) (3) Data frame sent\nI0514 13:35:21.694513 1509 log.go:172] (0xc000652420) Data frame received for 3\nI0514 13:35:21.694527 1509 log.go:172] (0xc0007f0000) (3) Data frame handling\nI0514 13:35:21.694588 1509 log.go:172] (0xc0006ae6e0) (5) Data frame sent\nI0514 13:35:21.694714 1509 log.go:172] (0xc000652420) Data frame received for 5\nI0514 13:35:21.694748 1509 log.go:172] (0xc0006ae6e0) (5) Data frame handling\nI0514 13:35:21.696256 1509 log.go:172] (0xc000652420) Data frame received for 1\nI0514 13:35:21.696289 1509 log.go:172] (0xc0006ae640) (1) Data frame handling\nI0514 13:35:21.696318 1509 log.go:172] (0xc0006ae640) (1) Data frame sent\nI0514 13:35:21.696341 1509 log.go:172] (0xc000652420) (0xc0006ae640) Stream removed, broadcasting: 1\nI0514 13:35:21.696793 1509 log.go:172] (0xc000652420) (0xc0006ae640) Stream removed, broadcasting: 1\nI0514 13:35:21.696824 1509 log.go:172] (0xc000652420) (0xc0007f0000) Stream removed, broadcasting: 3\nI0514 13:35:21.697036 1509 log.go:172] (0xc000652420) (0xc0006ae6e0) Stream removed, broadcasting: 5\nI0514 13:35:21.697103 1509 log.go:172] (0xc000652420) Go away received\n" May 14 13:35:21.701: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 13:35:21.701: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 13:35:31.718: INFO: Waiting for StatefulSet statefulset-3968/ss2 to complete update May 14 13:35:31.718: INFO: Waiting for Pod statefulset-3968/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 13:35:31.718: INFO: Waiting for Pod statefulset-3968/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 13:35:31.718: INFO: Waiting for Pod statefulset-3968/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 13:35:41.724: INFO: Waiting for StatefulSet statefulset-3968/ss2 to complete update May 14 13:35:41.724: INFO: Waiting for Pod statefulset-3968/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 13:35:51.724: INFO: Waiting for StatefulSet statefulset-3968/ss2 to complete update May 14 13:35:51.724: INFO: Waiting for Pod statefulset-3968/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 14 13:36:01.725: INFO: Deleting all statefulset in ns statefulset-3968 May 14 13:36:01.729: INFO: Scaling statefulset ss2 to 0 May 14 13:36:21.749: INFO: Waiting for statefulset status.replicas updated to 0 May 14 13:36:21.752: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:36:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3968" for this suite. 
May 14 13:36:27.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:36:27.862: INFO: namespace statefulset-3968 deletion completed in 6.088315441s • [SLOW TEST:170.360 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:36:27.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-01e383ed-fc61-4ab2-81e8-f45360129efe STEP: Creating a pod to test consume configMaps May 14 13:36:27.980: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7" in namespace "projected-6323" to be "success or failure" May 14 13:36:27.984: INFO: Pod "pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213403ms May 14 13:36:29.988: INFO: Pod "pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00772332s May 14 13:36:32.058: INFO: Pod "pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077713506s STEP: Saw pod success May 14 13:36:32.058: INFO: Pod "pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7" satisfied condition "success or failure" May 14 13:36:32.061: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7 container projected-configmap-volume-test: STEP: delete the pod May 14 13:36:32.082: INFO: Waiting for pod pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7 to disappear May 14 13:36:32.117: INFO: Pod pod-projected-configmaps-1bdc495d-4c69-4e49-b655-70537092a8e7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:36:32.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6323" for this suite. 
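Note: defaultMode on a projected volume sets the permission bits of every file the volume projects, which is what the defaultMode test asserts. A minimal sketch (names, key, and mode are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-mode-demo
  data:
    cred: "topsecret"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /etc/projected/cred"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        defaultMode: 0400        # applied to every projected file below
        sources:
        - configMap:
            name: projected-mode-demo
  EOF
  kubectl logs projected-mode-demo   # expect: 400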
May 14 13:36:38.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:36:38.342: INFO: namespace projected-6323 deletion completed in 6.221823728s • [SLOW TEST:10.479 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:36:38.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 14 13:36:38.467: INFO: Waiting up to 5m0s for pod "downward-api-d026534e-209e-4f08-862e-151f5073c27f" in namespace "downward-api-8529" to be "success or failure" May 14 13:36:38.470: INFO: Pod "downward-api-d026534e-209e-4f08-862e-151f5073c27f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.314171ms May 14 13:36:40.474: INFO: Pod "downward-api-d026534e-209e-4f08-862e-151f5073c27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007279633s May 14 13:36:42.477: INFO: Pod "downward-api-d026534e-209e-4f08-862e-151f5073c27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009770127s STEP: Saw pod success May 14 13:36:42.477: INFO: Pod "downward-api-d026534e-209e-4f08-862e-151f5073c27f" satisfied condition "success or failure" May 14 13:36:42.479: INFO: Trying to get logs from node iruya-worker pod downward-api-d026534e-209e-4f08-862e-151f5073c27f container dapi-container: STEP: delete the pod May 14 13:36:42.589: INFO: Waiting for pod downward-api-d026534e-209e-4f08-862e-151f5073c27f to disappear May 14 13:36:42.591: INFO: Pod downward-api-d026534e-209e-4f08-862e-151f5073c27f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:36:42.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8529" for this suite. 
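Note: when a container declares no resource limits, resourceFieldRef env vars fall back to the node's allocatable capacity, which is the behavior this test asserts. A sketch of the env wiring (pod and variable names are assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env | grep _LIMIT"]
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu      # no limit declared -> node allocatable CPU
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory   # no limit declared -> node allocatable memory
  EOF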
May 14 13:36:48.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:36:48.690: INFO: namespace downward-api-8529 deletion completed in 6.095768493s • [SLOW TEST:10.348 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:36:48.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-942a22b4-6249-4511-ac4f-6e855710c3ff STEP: Creating a pod to test consume configMaps May 14 13:36:48.866: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e" in namespace "projected-9248" to be "success or failure" May 14 13:36:48.901: INFO: Pod "pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.054994ms May 14 13:36:50.906: INFO: Pod "pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039460125s May 14 13:36:52.910: INFO: Pod "pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e": Phase="Running", Reason="", readiness=true. Elapsed: 4.043390529s May 14 13:36:54.913: INFO: Pod "pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046392481s STEP: Saw pod success May 14 13:36:54.913: INFO: Pod "pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e" satisfied condition "success or failure" May 14 13:36:54.915: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e container projected-configmap-volume-test: STEP: delete the pod May 14 13:36:54.938: INFO: Waiting for pod pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e to disappear May 14 13:36:54.949: INFO: Pod pod-projected-configmaps-b474d108-2191-4ea2-952a-1d1a31a1ef9e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:36:54.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9248" for this suite. 
May 14 13:37:00.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:37:01.035: INFO: namespace projected-9248 deletion completed in 6.083926169s • [SLOW TEST:12.344 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:37:01.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 13:37:01.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4502' May 14 13:37:01.274: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 13:37:01.274: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 14 13:37:03.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4502' May 14 13:37:03.412: INFO: stderr: "" May 14 13:37:03.412: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:37:03.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4502" for this suite. 
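Note: both kubectl run invocations above trip the 1.15 generator deprecation warning. The replacements the warning points at look like this (equivalent commands, not taken from the harness):

  # instead of: kubectl run ... --restart=OnFailure --generator=job/v1
  kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
  # instead of: kubectl run ... --generator=deployment/apps.v1
  kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  # for a bare pod, the one generator that stays:
  kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never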
May 14 13:37:25.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:37:25.759: INFO: namespace kubectl-4502 deletion completed in 22.239959678s • [SLOW TEST:24.724 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:37:25.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-207bf4f9-dc1a-426a-9868-471bbf877fca STEP: Creating configMap with name cm-test-opt-upd-ba0c8697-5667-44b7-9c86-967545b3c2f5 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-207bf4f9-dc1a-426a-9868-471bbf877fca STEP: Updating configmap cm-test-opt-upd-ba0c8697-5667-44b7-9c86-967545b3c2f5 STEP: Creating configMap with name cm-test-opt-create-436b3264-0b9a-42af-9ebb-28fd8c1f7232 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:38:58.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8283" for this suite. 
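Note: the optional: true marker is what lets this pod keep running while the referenced configMaps are deleted, updated, and recreated; the kubelet's periodic sync then rewrites the mounted files, which is the "waiting to observe update in volume" step. A sketch of the volume wiring (names assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-cm-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: cm-opt
        mountPath: /etc/cm
    volumes:
    - name: cm-opt
      configMap:
        name: cm-test-opt
        optional: true   # pod starts (with an empty dir) even if cm-test-opt is absent
  EOF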
May 14 13:39:22.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:39:22.573: INFO: namespace projected-8283 deletion completed in 24.079840454s • [SLOW TEST:116.814 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:39:22.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:39:22.626: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c" in namespace "downward-api-9262" to be "success or failure" May 14 13:39:22.651: INFO: Pod "downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.72679ms May 14 13:39:24.656: INFO: Pod "downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02920877s May 14 13:39:26.659: INFO: Pod "downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032846943s STEP: Saw pod success May 14 13:39:26.659: INFO: Pod "downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c" satisfied condition "success or failure" May 14 13:39:26.663: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c container client-container: STEP: delete the pod May 14 13:39:26.709: INFO: Waiting for pod downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c to disappear May 14 13:39:26.712: INFO: Pod downwardapi-volume-f72aeb79-1698-4439-af4c-6b9cbed8fd3c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:39:26.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9262" for this suite. 
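Note: the downwardAPI volume variant exposes the same resourceFieldRef data as files rather than env vars; in volume items the containerName is mandatory. A sketch (names assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container   # required in volume items
            resource: requests.memory         # written in bytes (33554432) with the default divisor
  EOF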
May 14 13:39:32.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:39:32.853: INFO: namespace downward-api-9262 deletion completed in 6.137418428s • [SLOW TEST:10.279 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:39:32.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 14 13:39:32.935: INFO: Waiting up to 5m0s for pod "var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8" in namespace "var-expansion-509" to be "success or failure" May 14 13:39:32.947: INFO: Pod "var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.601903ms May 14 13:39:34.951: INFO: Pod "var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016487815s May 14 13:39:36.954: INFO: Pod "var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019423275s STEP: Saw pod success May 14 13:39:36.954: INFO: Pod "var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8" satisfied condition "success or failure" May 14 13:39:36.956: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8 container dapi-container: STEP: delete the pod May 14 13:39:37.052: INFO: Waiting for pod var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8 to disappear May 14 13:39:37.127: INFO: Pod var-expansion-8700246d-c05f-4713-8bee-b02b24ddb1c8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:39:37.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-509" for this suite. 
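Note: env composition uses the $(VAR) syntax, which the kubelet expands from variables defined earlier in the same env list before the container starts. A sketch (names assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo $COMPOSED"]
      env:
      - name: FIRST
        value: "foo"
      - name: COMPOSED
        value: "$(FIRST)-bar"    # expanded to foo-bar; only earlier vars are visible
  EOF
  kubectl logs var-expansion-demo   # foo-bar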
May 14 13:39:43.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:39:43.353: INFO: namespace var-expansion-509 deletion completed in 6.222128992s • [SLOW TEST:10.500 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:39:43.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:39:43.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476" in namespace "downward-api-4357" to be "success or failure" May 14 13:39:43.424: INFO: Pod "downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476": Phase="Pending", Reason="", readiness=false. Elapsed: 18.207878ms May 14 13:39:45.428: INFO: Pod "downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022577523s May 14 13:39:47.431: INFO: Pod "downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025809508s STEP: Saw pod success May 14 13:39:47.431: INFO: Pod "downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476" satisfied condition "success or failure" May 14 13:39:47.434: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476 container client-container: STEP: delete the pod May 14 13:39:47.450: INFO: Waiting for pod downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476 to disappear May 14 13:39:47.455: INFO: Pod downwardapi-volume-3239ed8a-0f5f-4722-a157-395e7bfda476 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:39:47.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4357" for this suite. 
May 14 13:39:53.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:39:53.546: INFO: namespace downward-api-4357 deletion completed in 6.088772397s • [SLOW TEST:10.193 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:39:53.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5114 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 13:39:53.617: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 13:40:19.822: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.81:8080/dial?request=hostName&protocol=http&host=10.244.1.80&port=8080&tries=1'] Namespace:pod-network-test-5114 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:40:19.822: INFO: >>> kubeConfig: /root/.kube/config I0514 13:40:19.852193 6 log.go:172] (0xc001e4eb00) (0xc00223fcc0) Create stream I0514 13:40:19.852244 6 log.go:172] (0xc001e4eb00) (0xc00223fcc0) Stream added, broadcasting: 1 I0514 13:40:19.854197 6 log.go:172] (0xc001e4eb00) Reply frame received for 1 I0514 13:40:19.854251 6 log.go:172] (0xc001e4eb00) (0xc000d125a0) Create stream I0514 13:40:19.854263 6 log.go:172] (0xc001e4eb00) (0xc000d125a0) Stream added, broadcasting: 3 I0514 13:40:19.855210 6 log.go:172] (0xc001e4eb00) Reply frame received for 3 I0514 13:40:19.855268 6 log.go:172] (0xc001e4eb00) (0xc00223fea0) Create stream I0514 13:40:19.855294 6 log.go:172] (0xc001e4eb00) (0xc00223fea0) Stream added, broadcasting: 5 I0514 13:40:19.856266 6 log.go:172] (0xc001e4eb00) Reply frame received for 5 I0514 13:40:20.052649 6 log.go:172] (0xc001e4eb00) Data frame received for 3 I0514 13:40:20.052680 6 log.go:172] (0xc000d125a0) (3) Data frame handling I0514 13:40:20.052699 6 log.go:172] (0xc000d125a0) (3) Data frame sent I0514 13:40:20.053929 6 log.go:172] (0xc001e4eb00) Data frame received for 5 I0514 13:40:20.053952 6 log.go:172] (0xc00223fea0) (5) Data frame handling I0514 13:40:20.053998 6 log.go:172] (0xc001e4eb00) Data frame received for 3 I0514 13:40:20.054013 6 log.go:172] (0xc000d125a0) (3) Data frame handling I0514 13:40:20.056850 6 log.go:172] (0xc001e4eb00) Data frame received for 1 I0514 13:40:20.056883 6 log.go:172] (0xc00223fcc0) (1) Data frame handling I0514 13:40:20.056903 6 log.go:172] 
(0xc00223fcc0) (1) Data frame sent I0514 13:40:20.056923 6 log.go:172] (0xc001e4eb00) (0xc00223fcc0) Stream removed, broadcasting: 1 I0514 13:40:20.057042 6 log.go:172] (0xc001e4eb00) (0xc00223fcc0) Stream removed, broadcasting: 1 I0514 13:40:20.057061 6 log.go:172] (0xc001e4eb00) (0xc000d125a0) Stream removed, broadcasting: 3 I0514 13:40:20.057079 6 log.go:172] (0xc001e4eb00) (0xc00223fea0) Stream removed, broadcasting: 5 May 14 13:40:20.057: INFO: Waiting for endpoints: map[] I0514 13:40:20.057772 6 log.go:172] (0xc001e4eb00) Go away received May 14 13:40:20.061: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.81:8080/dial?request=hostName&protocol=http&host=10.244.2.129&port=8080&tries=1'] Namespace:pod-network-test-5114 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:40:20.061: INFO: >>> kubeConfig: /root/.kube/config I0514 13:40:20.088883 6 log.go:172] (0xc002b0ac60) (0xc001e62780) Create stream I0514 13:40:20.088918 6 log.go:172] (0xc002b0ac60) (0xc001e62780) Stream added, broadcasting: 1 I0514 13:40:20.091196 6 log.go:172] (0xc002b0ac60) Reply frame received for 1 I0514 13:40:20.091222 6 log.go:172] (0xc002b0ac60) (0xc0005305a0) Create stream I0514 13:40:20.091230 6 log.go:172] (0xc002b0ac60) (0xc0005305a0) Stream added, broadcasting: 3 I0514 13:40:20.091940 6 log.go:172] (0xc002b0ac60) Reply frame received for 3 I0514 13:40:20.091967 6 log.go:172] (0xc002b0ac60) (0xc001e62820) Create stream I0514 13:40:20.091976 6 log.go:172] (0xc002b0ac60) (0xc001e62820) Stream added, broadcasting: 5 I0514 13:40:20.092702 6 log.go:172] (0xc002b0ac60) Reply frame received for 5 I0514 13:40:20.167856 6 log.go:172] (0xc002b0ac60) Data frame received for 3 I0514 13:40:20.167880 6 log.go:172] (0xc0005305a0) (3) Data frame handling I0514 13:40:20.167896 6 log.go:172] (0xc0005305a0) (3) Data frame sent I0514 13:40:20.168618 6 log.go:172] (0xc002b0ac60) Data frame received for 3 I0514 13:40:20.168648 6 log.go:172] (0xc0005305a0) (3) Data frame handling I0514 13:40:20.168719 6 log.go:172] (0xc002b0ac60) Data frame received for 5 I0514 13:40:20.168763 6 log.go:172] (0xc001e62820) (5) Data frame handling I0514 13:40:20.170742 6 log.go:172] (0xc002b0ac60) Data frame received for 1 I0514 13:40:20.170770 6 log.go:172] (0xc001e62780) (1) Data frame handling I0514 13:40:20.170799 6 log.go:172] (0xc001e62780) (1) Data frame sent I0514 13:40:20.170818 6 log.go:172] (0xc002b0ac60) (0xc001e62780) Stream removed, broadcasting: 1 I0514 13:40:20.170838 6 log.go:172] (0xc002b0ac60) Go away received I0514 13:40:20.170980 6 log.go:172] (0xc002b0ac60) (0xc001e62780) Stream removed, broadcasting: 1 I0514 13:40:20.171002 6 log.go:172] (0xc002b0ac60) (0xc0005305a0) Stream removed, broadcasting: 3 I0514 13:40:20.171012 6 log.go:172] (0xc002b0ac60) (0xc001e62820) Stream removed, broadcasting: 5 May 14 13:40:20.171: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:40:20.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5114" for this suite. 
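Note: the netcheck above asks a "dial" webserver in one pod to curl a peer pod and report back; the substance of the test is simply that pod IPs are reachable directly, including across nodes. The same probe by hand, reusing the pattern from the log (the namespace and pods exist only during the run, and the 10.244.x.x addresses are whatever that run happened to get):

  kubectl -n pod-network-test-5114 exec host-test-container-pod -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.1.81:8080/dial?request=hostName&protocol=http&host=10.244.2.129&port=8080&tries=1'"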
May 14 13:40:44.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:40:44.271: INFO: namespace pod-network-test-5114 deletion completed in 24.09591848s • [SLOW TEST:50.724 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:40:44.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-d071826f-aef7-45b1-bf34-8f6d3b723f3e STEP: Creating a pod to test consume secrets May 14 13:40:44.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e" in namespace "projected-6931" to be "success or failure" May 14 13:40:44.403: INFO: Pod "pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200979ms May 14 13:40:46.406: INFO: Pod "pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005988145s May 14 13:40:48.411: INFO: Pod "pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010467546s STEP: Saw pod success May 14 13:40:48.411: INFO: Pod "pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e" satisfied condition "success or failure" May 14 13:40:48.414: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e container projected-secret-volume-test: STEP: delete the pod May 14 13:40:48.434: INFO: Waiting for pod pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e to disappear May 14 13:40:48.438: INFO: Pod pod-projected-secrets-03a746df-a96b-42cb-adf6-907fee9d5d2e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:40:48.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6931" for this suite. 
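The pod behind this spec mounts the secret through a projected volume and exits after reading it, which is why the framework waits on "success or failure" rather than readiness. A minimal sketch with the 1.15-era k8s.io/api types; the image, mount path, and the data-1 key are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run to completion, then "Saw pod success"
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test", // the secret created in the STEP above
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // stand-in; the e2e suite uses its own test image
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}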
May 14 13:40:54.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:40:54.544: INFO: namespace projected-6931 deletion completed in 6.102413809s • [SLOW TEST:10.273 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:40:54.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9404, will wait for the garbage collector to delete the pods May 14 13:41:00.693: INFO: Deleting Job.batch foo took: 6.343684ms May 14 13:41:00.993: INFO: Terminating Job.batch foo pods took: 300.227671ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:41:42.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9404" for this suite. May 14 13:41:48.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:41:48.890: INFO: namespace job-9404 deletion completed in 6.124605955s • [SLOW TEST:54.346 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:41:48.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:41:49.014: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
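Everything in the next block hinges on spec.template.spec.nodeSelector: the DaemonSet only schedules onto nodes whose labels match, so relabeling a node in or out of the selector launches or evicts the daemon pod. A sketch of such a DaemonSet, with the color label key, pod labels, and image chosen for illustration (the real suite generates its own):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// The test switches to RollingUpdate mid-run; OnDelete would keep
			// stale pods around across template edits.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying color=blue run a copy; no node is
					// labeled yet, hence "Number of running nodes: 0" below.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("%s selects nodes with %v\n", ds.Name, ds.Spec.Template.Spec.NodeSelector)
}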
May 14 13:41:49.090: INFO: Number of nodes with available pods: 0 May 14 13:41:49.090: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 14 13:41:49.120: INFO: Number of nodes with available pods: 0 May 14 13:41:49.120: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:50.125: INFO: Number of nodes with available pods: 0 May 14 13:41:50.125: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:51.151: INFO: Number of nodes with available pods: 0 May 14 13:41:51.151: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:52.125: INFO: Number of nodes with available pods: 0 May 14 13:41:52.125: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:53.124: INFO: Number of nodes with available pods: 1 May 14 13:41:53.124: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 14 13:41:53.207: INFO: Number of nodes with available pods: 1 May 14 13:41:53.207: INFO: Number of running nodes: 0, number of available pods: 1 May 14 13:41:54.209: INFO: Number of nodes with available pods: 0 May 14 13:41:54.210: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 14 13:41:54.226: INFO: Number of nodes with available pods: 0 May 14 13:41:54.226: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:55.230: INFO: Number of nodes with available pods: 0 May 14 13:41:55.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:56.343: INFO: Number of nodes with available pods: 0 May 14 13:41:56.343: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:57.230: INFO: Number of nodes with available pods: 0 May 14 13:41:57.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:58.230: INFO: Number of nodes with available pods: 0 May 14 13:41:58.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:41:59.230: INFO: Number of nodes with available pods: 0 May 14 13:41:59.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:00.230: INFO: Number of nodes with available pods: 0 May 14 13:42:00.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:01.230: INFO: Number of nodes with available pods: 0 May 14 13:42:01.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:02.244: INFO: Number of nodes with available pods: 0 May 14 13:42:02.244: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:03.229: INFO: Number of nodes with available pods: 0 May 14 13:42:03.229: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:04.230: INFO: Number of nodes with available pods: 0 May 14 13:42:04.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:05.230: INFO: Number of nodes with available pods: 0 May 14 13:42:05.230: INFO: Node iruya-worker is running more than one daemon pod May 14 13:42:06.230: INFO: Number of nodes with available pods: 1 May 14 13:42:06.230: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions 
daemon-set in namespace daemonsets-8527, will wait for the garbage collector to delete the pods May 14 13:42:06.295: INFO: Deleting DaemonSet.extensions daemon-set took: 6.575713ms May 14 13:42:06.596: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.273049ms May 14 13:42:12.199: INFO: Number of nodes with available pods: 0 May 14 13:42:12.199: INFO: Number of running nodes: 0, number of available pods: 0 May 14 13:42:12.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8527/daemonsets","resourceVersion":"10861290"},"items":null} May 14 13:42:12.202: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8527/pods","resourceVersion":"10861290"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:42:12.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8527" for this suite. May 14 13:42:18.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:42:18.304: INFO: namespace daemonsets-8527 deletion completed in 6.078710988s • [SLOW TEST:29.414 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:42:18.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 14 13:42:28.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:28.426: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:30.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:30.431: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:32.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:32.431: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:34.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:34.430: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:36.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:36.431: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:38.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:38.431: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:40.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:40.431: INFO: Pod pod-with-poststart-http-hook still exists May 14 13:42:42.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 14 13:42:42.431: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:42:42.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4341" for this suite. 
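The pod torn down above declares a postStart HTTP hook aimed at the handler pod from BeforeEach; kubelet issues the GET immediately after the container starts, and the spec then confirms the handler saw it. A sketch of the wiring using the Handler type from the 1.15 API (renamed LifecycleHandler in modern releases); the host IP, echo path, and image are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // stand-in; the hook, not the workload, is under test
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // assumed handler path
							Host: "10.244.2.200",        // hypothetical IP of the handler pod
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Containers[0].Lifecycle, "", "  ")
	fmt.Println(string(out))
}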
May 14 13:43:04.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:43:04.526: INFO: namespace container-lifecycle-hook-4341 deletion completed in 22.090946548s • [SLOW TEST:46.222 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:43:04.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:43:09.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6985" for this suite. 
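Adoption means the controller finds a live pod matching its selector and writes itself into that pod's ownerReferences rather than creating a second replica, which is what "Then the orphan pod is adopted" asserts. A sketch of the orphan-then-controller pair, with an illustrative image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	selector := map[string]string{"name": "pod-adoption"}

	// Step 1: an orphan pod that merely carries the matching label.
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: selector},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine",
		}}},
	}

	// Step 2: a replication controller whose selector matches. With one
	// matching pod already Running, the controller adopts it (sets an
	// ownerReference) instead of creating a second replica.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: selector,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: selector},
				Spec:       orphan.Spec,
			},
		},
	}
	fmt.Println(orphan.Name, "will be adopted by", rc.Name)
}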
May 14 13:43:31.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:43:31.860: INFO: namespace replication-controller-6985 deletion completed in 22.152924091s • [SLOW TEST:27.333 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:43:31.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:43:31.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988" in namespace "projected-4024" to be "success or failure" May 14 13:43:31.962: INFO: Pod "downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130248ms May 14 13:43:33.965: INFO: Pod "downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007214263s May 14 13:43:35.969: INFO: Pod "downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011228033s STEP: Saw pod success May 14 13:43:35.969: INFO: Pod "downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988" satisfied condition "success or failure" May 14 13:43:35.972: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988 container client-container: STEP: delete the pod May 14 13:43:36.062: INFO: Waiting for pod downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988 to disappear May 14 13:43:36.079: INFO: Pod downwardapi-volume-86545d2e-1631-4de5-bf54-f147d6729988 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:43:36.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4024" for this suite. 
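This spec surfaces the container's own CPU request as a file through a projected downwardAPI source and simply cats it. A rough equivalent; the busybox image and the 250m request are assumptions, and note the divisor: with the default divisor of 1, a 250m request would round up and the file would read "1" instead of "250".

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
										// 250m / 1m = 250, written as "250".
										Divisor: resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Volumes, "", "  ")
	fmt.Println(string(out))
}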
May 14 13:43:42.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:43:42.238: INFO: namespace projected-4024 deletion completed in 6.156678463s • [SLOW TEST:10.378 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:43:42.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 14 13:43:50.383: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:43:50.400: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:43:52.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:43:52.405: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:43:54.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:43:54.405: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:43:56.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:43:56.404: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:43:58.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:43:58.405: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:00.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:44:00.411: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:02.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:44:02.405: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:04.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:44:04.405: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:06.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:44:06.416: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:08.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:44:08.405: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:10.401: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 13:44:10.404: INFO: Pod pod-with-prestop-exec-hook still exists May 14 13:44:12.401: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear May 14 13:44:12.404: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:44:12.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4054" for this suite. May 14 13:44:34.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:44:34.503: INFO: namespace container-lifecycle-hook-4054 deletion completed in 22.090346611s • [SLOW TEST:52.263 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:44:34.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:44:34.682: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:44:38.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4600" for this suite. 
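The half-minute of "still exists" polling for pod-with-prestop-exec-hook is the hook at work: deletion blocks until the preStop exec handler has run (bounded by the grace period), after which the spec asserts that the handler pod observed the call. The wiring, sketched with an assumed handler address, command, and image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(15) // kubelet waits out the hook within this budget
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox", // stand-in
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical handler address; the real test targets the
							// pod created in BeforeEach and later checks it saw the call.
							Command: []string{"sh", "-c",
								"wget -qO- http://10.244.2.201:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Containers[0].Lifecycle, "", "  ")
	fmt.Println(string(out))
}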
May 14 13:45:18.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:45:18.871: INFO: namespace pods-4600 deletion completed in 40.133341935s • [SLOW TEST:44.368 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:45:18.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:45:23.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3754" for this suite. May 14 13:45:29.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:45:29.164: INFO: namespace emptydir-wrapper-3754 deletion completed in 6.098066856s • [SLOW TEST:10.293 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:45:29.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3410 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3410 STEP: 
Creating statefulset with conflicting port in namespace statefulset-3410 STEP: Waiting until pod test-pod starts running in namespace statefulset-3410 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-3410 May 14 13:45:33.327: INFO: Observed stateful pod in namespace: statefulset-3410, name: ss-0, uid: b6e07786-7873-427f-8938-d43887cf6f00, status phase: Pending. Waiting for statefulset controller to delete. May 14 13:45:42.148: INFO: Observed stateful pod in namespace: statefulset-3410, name: ss-0, uid: b6e07786-7873-427f-8938-d43887cf6f00, status phase: Failed. Waiting for statefulset controller to delete. May 14 13:45:42.156: INFO: Observed stateful pod in namespace: statefulset-3410, name: ss-0, uid: b6e07786-7873-427f-8938-d43887cf6f00, status phase: Failed. Waiting for statefulset controller to delete. May 14 13:45:42.180: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3410 STEP: Removing pod with conflicting port in namespace statefulset-3410 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3410 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 14 13:45:46.356: INFO: Deleting all statefulsets in ns statefulset-3410 May 14 13:45:46.359: INFO: Scaling statefulset ss to 0 May 14 13:45:56.376: INFO: Waiting for statefulset status.replicas to be updated to 0 May 14 13:45:56.379: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:45:56.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3410" for this suite.
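The Pending, Failed, deleted, recreated cycle above is driven by a deliberate hostPort collision: test-pod is pinned to a node and holds a host port, and ss-0, pinned to the same node with the same hostPort, cannot start until test-pod is removed, while the StatefulSet controller keeps replacing the Failed copy. A sketch of the conflicting template; the port number, node name, and image are assumptions:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss"}
	conflictPort := int32(21017) // any port already held on the target node works
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test", // the headless service created in BeforeEach
			Replicas:    int32Ptr(1),
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName: "iruya-worker", // pinned to the node whose port is taken
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{
							ContainerPort: conflictPort,
							HostPort:      conflictPort, // collides until the blocking pod is removed
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%s/ss-0 will stay Failed while hostPort %d is occupied\n", ss.Name, conflictPort)
}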
May 14 13:46:02.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:46:02.551: INFO: namespace statefulset-3410 deletion completed in 6.137257509s • [SLOW TEST:33.386 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:46:02.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 14 13:46:09.209: INFO: Successfully updated pod "labelsupdate5361a902-3f63-4c62-ab24-a038ed7f4188" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:46:11.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7888" for this suite. 
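"Successfully updated pod" here refers to a label edit on a running pod: metadata.labels is exposed through a downward API volume, so kubelet rewrites the mounted file in place with no container restart, and the test tails the file for the new value. The relevant volume, sketched:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "labels",
					FieldRef: &corev1.ObjectFieldSelector{
						// Rewritten in place by kubelet whenever the pod's labels change.
						FieldPath: "metadata.labels",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

One caveat worth knowing: these in-place updates reach normal volume mounts but do not propagate to subPath mounts.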
May 14 13:46:33.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:46:33.334: INFO: namespace downward-api-7888 deletion completed in 22.088285864s • [SLOW TEST:30.783 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:46:33.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 14 13:46:33.446: INFO: Waiting up to 5m0s for pod "client-containers-15236d19-3880-465f-8050-088e258239a2" in namespace "containers-2263" to be "success or failure" May 14 13:46:33.450: INFO: Pod "client-containers-15236d19-3880-465f-8050-088e258239a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.957848ms May 14 13:46:35.659: INFO: Pod "client-containers-15236d19-3880-465f-8050-088e258239a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212979961s May 14 13:46:37.663: INFO: Pod "client-containers-15236d19-3880-465f-8050-088e258239a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217162619s May 14 13:46:39.667: INFO: Pod "client-containers-15236d19-3880-465f-8050-088e258239a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.220834182s STEP: Saw pod success May 14 13:46:39.667: INFO: Pod "client-containers-15236d19-3880-465f-8050-088e258239a2" satisfied condition "success or failure" May 14 13:46:39.670: INFO: Trying to get logs from node iruya-worker2 pod client-containers-15236d19-3880-465f-8050-088e258239a2 container test-container: STEP: delete the pod May 14 13:46:39.758: INFO: Waiting for pod client-containers-15236d19-3880-465f-8050-088e258239a2 to disappear May 14 13:46:39.803: INFO: Pod client-containers-15236d19-3880-465f-8050-088e258239a2 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:46:39.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2263" for this suite. 
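"test override all" exercises both halves of the image entrypoint: command replaces the image's ENTRYPOINT and args replaces its CMD. Sketched with busybox standing in for the e2e entrypoint-tester image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox", // stand-in for the suite's own test image
		// Command overrides the image ENTRYPOINT; Args overrides the image CMD.
		// Setting both is the "override all" case this spec verifies.
		Command: []string{"echo"},
		Args:    []string{"override", "arguments"},
	}
	fmt.Println(c.Command, c.Args)
}

The other three combinations follow from the same two fields: with neither set the image defaults apply; command alone drops the image CMD; args alone keeps the image ENTRYPOINT and feeds it the new arguments.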
May 14 13:46:45.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:46:45.891: INFO: namespace containers-2263 deletion completed in 6.084449453s • [SLOW TEST:12.556 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:46:45.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0514 13:46:47.060860 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 13:46:47.060: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:46:47.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4531" for this suite. 
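The non-orphaning delete the collector verifies above can be reproduced directly: deleting a Deployment with a background propagation policy removes the object first, and the garbage collector then follows ownerReferences to delete the dependent ReplicaSet and pods, which is exactly the "expected 0 rs / expected 0 pods" condition polled in the log. A sketch against the 1.15-era client-go signatures, with placeholder namespace and deployment name:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation (the non-orphaning case): the deployment goes
	// away immediately and the garbage collector then deletes the dependent
	// ReplicaSet and pods via their ownerReferences.
	policy := metav1.DeletePropagationBackground
	if err := client.AppsV1().Deployments("default").Delete(
		"example-deployment", // placeholder
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}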
May 14 13:46:53.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:46:53.185: INFO: namespace gc-4531 deletion completed in 6.121945713s • [SLOW TEST:7.294 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:46:53.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 14 13:46:53.229: INFO: Waiting up to 5m0s for pod "pod-9155919f-2d49-488e-8036-44a7c572051b" in namespace "emptydir-4794" to be "success or failure" May 14 13:46:53.249: INFO: Pod "pod-9155919f-2d49-488e-8036-44a7c572051b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.911043ms May 14 13:46:55.305: INFO: Pod "pod-9155919f-2d49-488e-8036-44a7c572051b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075104469s May 14 13:46:57.308: INFO: Pod "pod-9155919f-2d49-488e-8036-44a7c572051b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078542668s STEP: Saw pod success May 14 13:46:57.308: INFO: Pod "pod-9155919f-2d49-488e-8036-44a7c572051b" satisfied condition "success or failure" May 14 13:46:57.311: INFO: Trying to get logs from node iruya-worker pod pod-9155919f-2d49-488e-8036-44a7c572051b container test-container: STEP: delete the pod May 14 13:46:57.331: INFO: Waiting for pod pod-9155919f-2d49-488e-8036-44a7c572051b to disappear May 14 13:46:57.335: INFO: Pod pod-9155919f-2d49-488e-8036-44a7c572051b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:46:57.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4794" for this suite. 
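"volume on tmpfs" is an emptyDir with medium Memory, which backs the volume with RAM instead of node disk; the test container prints the mount's filesystem type and mode, and the framework checks the logged output (emptyDir mounts default to mode 0777). A sketch, with busybox as an assumed stand-in for the suite's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" mounts the emptyDir as tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Log the filesystem type and the mode of the mount point.
				Command: []string{"sh", "-c",
					"mount | grep /test-volume && stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}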
May 14 13:47:03.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:47:03.437: INFO: namespace emptydir-4794 deletion completed in 6.09889669s • [SLOW TEST:10.251 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:47:03.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:47:03.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9998" for this suite. May 14 13:47:09.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:47:09.650: INFO: namespace services-9998 deletion completed in 6.123591887s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.212 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:47:09.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:47:09.750: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 14 13:47:14.754: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 13:47:14.754: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 14 13:47:16.790: 
INFO: Creating deployment "test-rollover-deployment" May 14 13:47:16.811: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 14 13:47:18.829: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 14 13:47:18.835: INFO: Ensure that both replica sets have 1 created replica May 14 13:47:18.840: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 14 13:47:18.846: INFO: Updating deployment test-rollover-deployment May 14 13:47:18.846: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 14 13:47:20.907: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 14 13:47:20.913: INFO: Make sure deployment "test-rollover-deployment" is complete May 14 13:47:20.925: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:20.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060839, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:47:22.938: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:22.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060839, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:47:24.931: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:24.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060843, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:47:26.933: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:26.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060843, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:47:28.933: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:28.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060843, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:47:30.934: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:30.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060843, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 14 13:47:32.932: INFO: all replica sets need to contain the pod-template-hash label May 14 13:47:32.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060843, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060836, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:47:34.931: INFO: May 14 13:47:34.931: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 14 13:47:34.937: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9579,SelfLink:/apis/apps/v1/namespaces/deployment-9579/deployments/test-rollover-deployment,UID:2ee1a259-42f7-4ac5-9eab-a56b791529b3,ResourceVersion:10862456,Generation:2,CreationTimestamp:2020-05-14 13:47:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-14 13:47:16 +0000 UTC 2020-05-14 13:47:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-14 13:47:34 +0000 UTC 2020-05-14 13:47:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 14 13:47:34.939: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9579,SelfLink:/apis/apps/v1/namespaces/deployment-9579/replicasets/test-rollover-deployment-854595fc44,UID:9d687894-544f-42da-97db-1dd5686a4572,ResourceVersion:10862445,Generation:2,CreationTimestamp:2020-05-14 13:47:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ee1a259-42f7-4ac5-9eab-a56b791529b3 0xc0017f4d97 0xc0017f4d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 14 13:47:34.939: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 14 13:47:34.940: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9579,SelfLink:/apis/apps/v1/namespaces/deployment-9579/replicasets/test-rollover-controller,UID:92363366-a5f8-49d1-a8fe-0fff0349659d,ResourceVersion:10862455,Generation:2,CreationTimestamp:2020-05-14 13:47:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ee1a259-42f7-4ac5-9eab-a56b791529b3 0xc0017f4caf 0xc0017f4cc0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 13:47:34.940: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9579,SelfLink:/apis/apps/v1/namespaces/deployment-9579/replicasets/test-rollover-deployment-9b8b997cf,UID:5b69ecf3-2f17-4c5c-b07a-c2b0f2f95aee,ResourceVersion:10862407,Generation:2,CreationTimestamp:2020-05-14 13:47:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 2ee1a259-42f7-4ac5-9eab-a56b791529b3 0xc0017f4e60 0xc0017f4e61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 13:47:34.942: INFO: Pod "test-rollover-deployment-854595fc44-lfrw2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-lfrw2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9579,SelfLink:/api/v1/namespaces/deployment-9579/pods/test-rollover-deployment-854595fc44-lfrw2,UID:f5e0c148-476b-4fb8-9ee6-56b0b63c74e8,ResourceVersion:10862423,Generation:0,CreationTimestamp:2020-05-14 13:47:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
9d687894-544f-42da-97db-1dd5686a4572 0xc0017f5a47 0xc0017f5a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lj8qt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lj8qt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lj8qt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0017f5ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0017f5ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:47:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:47:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:47:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:47:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.91,StartTime:2020-05-14 13:47:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-14 13:47:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://99a3ece12ed71b7c06d22bfdfbd6816fb18dd8717764f7046220010d2b306f31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:47:34.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9579" for this suite. 
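The rollover sequence above hinges on the Deployment's RollingUpdate strategy (maxUnavailable 0, maxSurge 1) together with minReadySeconds 10: the old ReplicaSets are scaled to zero only after the new pod has been Ready for ten seconds, which is why the status briefly reports two replicas with one unavailable. A minimal hand-written equivalent of the object the framework builds programmatically, with field values taken from the dump above (the apply invocation itself is illustrative; the suite creates the object through the Go client):

kubectl -n deployment-9579 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # new pod must stay Ready this long before old ReplicaSets scale down
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired replica count
      maxSurge: 1            # permit one extra pod while rolling over
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF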
May 14 13:47:42.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:47:43.044: INFO: namespace deployment-9579 deletion completed in 8.099970625s • [SLOW TEST:33.393 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:47:43.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 14 13:47:43.155: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3179,SelfLink:/api/v1/namespaces/watch-3179/configmaps/e2e-watch-test-resource-version,UID:f15ccca7-e691-4b3c-93b5-1a978b34c9cb,ResourceVersion:10862518,Generation:0,CreationTimestamp:2020-05-14 13:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 13:47:43.156: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3179,SelfLink:/api/v1/namespaces/watch-3179/configmaps/e2e-watch-test-resource-version,UID:f15ccca7-e691-4b3c-93b5-1a978b34c9cb,ResourceVersion:10862519,Generation:0,CreationTimestamp:2020-05-14 13:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:47:43.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3179" for this suite. 
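The watcher test above records the resourceVersion returned by the first update, then starts a watch at that version, so the API server replays only the later history: exactly one MODIFIED event (already at mutation: 2) followed by the DELETED event. A rough by-hand equivalent against the raw watch API, assuming a kubectl proxy on the default 127.0.0.1:8001; RV is a placeholder, since the version returned by the first update is not printed in the log:

kubectl proxy &    # serves the API on 127.0.0.1:8001 by default

# watch=true plus resourceVersion starts the event stream at a historical point:
RV=<resourceVersion-from-first-update>
curl "http://127.0.0.1:8001/api/v1/namespaces/watch-3179/configmaps?watch=true&resourceVersion=${RV}"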
May 14 13:47:49.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:47:49.235: INFO: namespace watch-3179 deletion completed in 6.063828058s • [SLOW TEST:6.191 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:47:49.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-397 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 13:47:49.324: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 13:48:19.536: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.143:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-397 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:48:19.536: INFO: >>> kubeConfig: /root/.kube/config I0514 13:48:19.563362 6 log.go:172] (0xc002811ce0) (0xc002764b40) Create stream I0514 13:48:19.563390 6 log.go:172] (0xc002811ce0) (0xc002764b40) Stream added, broadcasting: 1 I0514 13:48:19.565309 6 log.go:172] (0xc002811ce0) Reply frame received for 1 I0514 13:48:19.565352 6 log.go:172] (0xc002811ce0) (0xc0012e7ea0) Create stream I0514 13:48:19.565365 6 log.go:172] (0xc002811ce0) (0xc0012e7ea0) Stream added, broadcasting: 3 I0514 13:48:19.566454 6 log.go:172] (0xc002811ce0) Reply frame received for 3 I0514 13:48:19.566488 6 log.go:172] (0xc002811ce0) (0xc00248fc20) Create stream I0514 13:48:19.566500 6 log.go:172] (0xc002811ce0) (0xc00248fc20) Stream added, broadcasting: 5 I0514 13:48:19.567348 6 log.go:172] (0xc002811ce0) Reply frame received for 5 I0514 13:48:19.649282 6 log.go:172] (0xc002811ce0) Data frame received for 3 I0514 13:48:19.649396 6 log.go:172] (0xc0012e7ea0) (3) Data frame handling I0514 13:48:19.649416 6 log.go:172] (0xc0012e7ea0) (3) Data frame sent I0514 13:48:19.649424 6 log.go:172] (0xc002811ce0) Data frame received for 3 I0514 13:48:19.649457 6 log.go:172] (0xc0012e7ea0) (3) Data frame handling I0514 13:48:19.649705 6 log.go:172] (0xc002811ce0) Data frame received for 5 I0514 13:48:19.649729 6 log.go:172] (0xc00248fc20) (5) Data frame handling I0514 13:48:19.651132 6 log.go:172] (0xc002811ce0) Data frame received for 1 I0514 13:48:19.651160 6 log.go:172] (0xc002764b40) (1) Data frame handling I0514 13:48:19.651177 6 log.go:172] 
(0xc002764b40) (1) Data frame sent I0514 13:48:19.651196 6 log.go:172] (0xc002811ce0) (0xc002764b40) Stream removed, broadcasting: 1 I0514 13:48:19.651313 6 log.go:172] (0xc002811ce0) (0xc002764b40) Stream removed, broadcasting: 1 I0514 13:48:19.651325 6 log.go:172] (0xc002811ce0) (0xc0012e7ea0) Stream removed, broadcasting: 3 I0514 13:48:19.651489 6 log.go:172] (0xc002811ce0) Go away received I0514 13:48:19.651542 6 log.go:172] (0xc002811ce0) (0xc00248fc20) Stream removed, broadcasting: 5 May 14 13:48:19.651: INFO: Found all expected endpoints: [netserver-0] May 14 13:48:19.655: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.92:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-397 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:48:19.655: INFO: >>> kubeConfig: /root/.kube/config I0514 13:48:19.685360 6 log.go:172] (0xc002c88b00) (0xc00248fe00) Create stream I0514 13:48:19.685396 6 log.go:172] (0xc002c88b00) (0xc00248fe00) Stream added, broadcasting: 1 I0514 13:48:19.687133 6 log.go:172] (0xc002c88b00) Reply frame received for 1 I0514 13:48:19.687180 6 log.go:172] (0xc002c88b00) (0xc002764c80) Create stream I0514 13:48:19.687191 6 log.go:172] (0xc002c88b00) (0xc002764c80) Stream added, broadcasting: 3 I0514 13:48:19.688010 6 log.go:172] (0xc002c88b00) Reply frame received for 3 I0514 13:48:19.688044 6 log.go:172] (0xc002c88b00) (0xc002764d20) Create stream I0514 13:48:19.688055 6 log.go:172] (0xc002c88b00) (0xc002764d20) Stream added, broadcasting: 5 I0514 13:48:19.688802 6 log.go:172] (0xc002c88b00) Reply frame received for 5 I0514 13:48:19.764350 6 log.go:172] (0xc002c88b00) Data frame received for 3 I0514 13:48:19.764394 6 log.go:172] (0xc002764c80) (3) Data frame handling I0514 13:48:19.764450 6 log.go:172] (0xc002764c80) (3) Data frame sent I0514 13:48:19.764623 6 log.go:172] (0xc002c88b00) Data frame received for 5 I0514 13:48:19.764647 6 log.go:172] (0xc002c88b00) Data frame received for 3 I0514 13:48:19.764713 6 log.go:172] (0xc002764c80) (3) Data frame handling I0514 13:48:19.764751 6 log.go:172] (0xc002764d20) (5) Data frame handling I0514 13:48:19.766479 6 log.go:172] (0xc002c88b00) Data frame received for 1 I0514 13:48:19.766493 6 log.go:172] (0xc00248fe00) (1) Data frame handling I0514 13:48:19.766503 6 log.go:172] (0xc00248fe00) (1) Data frame sent I0514 13:48:19.766589 6 log.go:172] (0xc002c88b00) (0xc00248fe00) Stream removed, broadcasting: 1 I0514 13:48:19.766666 6 log.go:172] (0xc002c88b00) (0xc00248fe00) Stream removed, broadcasting: 1 I0514 13:48:19.766676 6 log.go:172] (0xc002c88b00) (0xc002764c80) Stream removed, broadcasting: 3 I0514 13:48:19.766683 6 log.go:172] (0xc002c88b00) (0xc002764d20) Stream removed, broadcasting: 5 I0514 13:48:19.766700 6 log.go:172] (0xc002c88b00) Go away received May 14 13:48:19.766: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:48:19.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-397" for this suite. 
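Each endpoint check above is an exec into the host-network helper pod that curls a netserver pod's /hostName handler on port 8080; the test passes once every expected hostname ([netserver-0] and [netserver-1]) has been collected. The same probe can be reproduced by hand with the exact command from the log:

kubectl -n pod-network-test-397 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.143:8080/hostName | grep -v '^\s*$'"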
May 14 13:48:41.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:48:41.872: INFO: namespace pod-network-test-397 deletion completed in 22.101527769s • [SLOW TEST:52.636 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:48:41.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 14 13:48:45.997: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 14 13:48:51.101: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:48:51.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6829" for this suite.
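The grace-period test submits a pod, deletes it with a deadline, and polls the kubelet through the kubectl proxy started above until the pod is gone, taking a 404 as evidence that the termination notice was observed. A hedged manual equivalent; the pod name is never printed in the log, so a placeholder stands in, and the grace value shown is illustrative:

# The apiserver stamps deletionTimestamp and gives the kubelet this long
# to stop the containers before the object is removed:
kubectl -n pods-6829 delete pod <pod-name> --grace-period=30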
May 14 13:48:57.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:48:57.218: INFO: namespace pods-6829 deletion completed in 6.109203745s • [SLOW TEST:15.346 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:48:57.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0514 13:49:27.922709 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 13:49:27.922: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:49:27.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4671" for this suite. 
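Here the Deployment is deleted with PropagationPolicy=Orphan, so the garbage collector must leave the dependent ReplicaSet in place for the whole 30-second observation window. Two hand-run equivalents, assuming a kubectl proxy on 127.0.0.1:8001 and with <name> standing in for the generated Deployment name:

# kubectl of this era orphans dependents with --cascade=false:
kubectl -n gc-4671 delete deployment <name> --cascade=false

# or pass the propagation policy to the API directly:
curl -X DELETE "http://127.0.0.1:8001/apis/apps/v1/namespaces/gc-4671/deployments/<name>?propagationPolicy=Orphan"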
May 14 13:49:33.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:49:34.169: INFO: namespace gc-4671 deletion completed in 6.243724777s • [SLOW TEST:36.951 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:49:34.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ededbc3a-2419-4f5c-956e-c37674680000 STEP: Creating a pod to test consume configMaps May 14 13:49:34.840: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1" in namespace "projected-8920" to be "success or failure" May 14 13:49:35.012: INFO: Pod "pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1": Phase="Pending", Reason="", readiness=false. Elapsed: 171.799824ms May 14 13:49:37.062: INFO: Pod "pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221623605s May 14 13:49:39.066: INFO: Pod "pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225622264s STEP: Saw pod success May 14 13:49:39.066: INFO: Pod "pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1" satisfied condition "success or failure" May 14 13:49:39.068: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1 container projected-configmap-volume-test: STEP: delete the pod May 14 13:49:39.134: INFO: Waiting for pod pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1 to disappear May 14 13:49:39.148: INFO: Pod pod-projected-configmaps-3a4e8d63-f7d1-46d5-b574-32a90e94e3b1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:49:39.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8920" for this suite. 
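The projected-ConfigMap test creates the ConfigMap named in the log, mounts it into a pod through a projected volume, and compares the container's logged file contents against the expected data. A hedged sketch of such a pod; the ConfigMap name is the real one from the log, while the image, args, and key are illustrative stand-ins for what the suite generates:

kubectl -n projected-8920 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # the suite uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # illustrative test image
    args: ["--file_content=/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-ededbc3a-2419-4f5c-956e-c37674680000
EOF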
May 14 13:49:45.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:49:45.237: INFO: namespace projected-8920 deletion completed in 6.085941387s • [SLOW TEST:11.067 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:49:45.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 13:49:45.288: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:49:46.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4178" for this suite. 
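The CRD test does nothing more than register a definition and delete it again; against this v1.15 API server the relevant group is apiextensions.k8s.io/v1beta1. A minimal illustrative definition using the conventional CronTab example names rather than whatever random name the suite generated:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
EOF

# deleting the definition also removes any custom objects of that kind:
kubectl delete crd crontabs.stable.example.com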
May 14 13:49:52.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:49:52.476: INFO: namespace custom-resource-definition-4178 deletion completed in 6.073944814s • [SLOW TEST:7.239 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:49:52.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 14 13:49:52.574: INFO: PodSpec: initContainers in spec.initContainers May 14 13:50:45.764: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7cd5d8e0-5394-48cd-ba3c-1d9bd294e590", GenerateName:"", Namespace:"init-container-478", SelfLink:"/api/v1/namespaces/init-container-478/pods/pod-init-7cd5d8e0-5394-48cd-ba3c-1d9bd294e590", UID:"e7db36e1-0a25-43ce-ab7f-cd3ead767fb6", ResourceVersion:"10863102", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725060992, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"574238125"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jzg72", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0032d2000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jzg72", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jzg72", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jzg72", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002fa0088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c28000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002fa0110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002fa0130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002fa0138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002fa013c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060992, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060992, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060992, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725060992, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.148", StartTime:(*v1.Time)(0xc00254c060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c0a230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001c0a310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"containerd://f56961c38d59861f384e76574887c37b2376d5bf91a8f761f109d69f5bb7d13d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00254c0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00254c080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:50:45.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-478" for this suite. May 14 13:51:07.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:51:07.884: INFO: namespace init-container-478 deletion completed in 22.115420239s • [SLOW TEST:75.407 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:51:07.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-034e682c-cc09-4ce3-9925-7b5318589dc1 STEP: Creating a pod to test consume configMaps May 14 13:51:07.960: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96" in namespace "projected-6052" to be "success or failure" May 14 13:51:07.979: INFO: Pod "pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96": Phase="Pending", Reason="", readiness=false. Elapsed: 18.863199ms May 14 13:51:09.983: INFO: Pod "pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023389859s May 14 13:51:11.988: INFO: Pod "pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028055122s May 14 13:51:13.992: INFO: Pod "pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032742835s STEP: Saw pod success May 14 13:51:13.993: INFO: Pod "pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96" satisfied condition "success or failure" May 14 13:51:13.996: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96 container projected-configmap-volume-test: STEP: delete the pod May 14 13:51:14.035: INFO: Waiting for pod pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96 to disappear May 14 13:51:14.048: INFO: Pod pod-projected-configmaps-bd514e2d-0bb3-4c6b-8693-ac40d5020c96 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:51:14.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6052" for this suite. May 14 13:51:20.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:51:20.174: INFO: namespace projected-6052 deletion completed in 6.121553879s • [SLOW TEST:12.290 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:51:20.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 14 13:51:20.287: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 13:51:20.294: INFO: Waiting for terminating namespaces to be deleted... 
May 14 13:51:20.296: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 14 13:51:20.300: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 14 13:51:20.300: INFO: Container kube-proxy ready: true, restart count 0 May 14 13:51:20.300: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 14 13:51:20.300: INFO: Container kindnet-cni ready: true, restart count 0 May 14 13:51:20.300: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 14 13:51:20.304: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 14 13:51:20.304: INFO: Container kube-proxy ready: true, restart count 0 May 14 13:51:20.304: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 14 13:51:20.304: INFO: Container kindnet-cni ready: true, restart count 0 May 14 13:51:20.304: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 14 13:51:20.304: INFO: Container coredns ready: true, restart count 0 May 14 13:51:20.304: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 14 13:51:20.304: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-223218fe-ba3d-4fb1-b34b-364a368d3bf2 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-223218fe-ba3d-4fb1-b34b-364a368d3bf2 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-223218fe-ba3d-4fb1-b34b-364a368d3bf2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:51:28.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1361" for this suite.
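The predicate test first schedules an unlabeled pod to find a node with capacity, applies the random label shown above to that node, relaunches the pod with a matching nodeSelector, and finally strips the label. The manual equivalent, reusing the label key, value, and node from the log (the pod name is illustrative; pause:3.1 appears elsewhere in this run):

# apply the random label the test chose:
kubectl label node iruya-worker kubernetes.io/e2e-223218fe-ba3d-4fb1-b34b-364a368d3bf2=42

# a pod that can only land on a node carrying that label:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-223218fe-ba3d-4fb1-b34b-364a368d3bf2: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# remove the label again, as the teardown step does:
kubectl label node iruya-worker kubernetes.io/e2e-223218fe-ba3d-4fb1-b34b-364a368d3bf2-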
May 14 13:51:46.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:51:46.590: INFO: namespace sched-pred-1361 deletion completed in 18.080265546s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.416 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:51:46.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-9f2c13ec-e5a7-483f-836d-65d4ece6e342 in namespace container-probe-9968 May 14 13:51:50.703: INFO: Started pod busybox-9f2c13ec-e5a7-483f-836d-65d4ece6e342 in namespace container-probe-9968 STEP: checking the pod's current state and verifying that restartCount is present May 14 13:51:50.707: INFO: Initial restart count of pod busybox-9f2c13ec-e5a7-483f-836d-65d4ece6e342 is 0 May 14 13:52:40.819: INFO: Restart count of pod container-probe-9968/busybox-9f2c13ec-e5a7-483f-836d-65d4ece6e342 is now 1 (50.11246654s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:52:40.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9968" for this suite. 
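This probe test runs a busybox container that creates /tmp/health, sleeps, then deletes it; once "cat /tmp/health" starts failing, the kubelet kills and restarts the container, which is the 0 -> 1 restart-count transition logged above after roughly 50 seconds. A hand-written pod following the same well-known pattern; the name and timings are illustrative, not the suite's:

kubectl -n container-probe-9968 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-exec-liveness
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # starts failing once the file is gone
      initialDelaySeconds: 15
      periodSeconds: 5
EOF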
May 14 13:52:46.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:52:46.976: INFO: namespace container-probe-9968 deletion completed in 6.0796385s • [SLOW TEST:60.385 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:52:46.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5432 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5432 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5432 May 14 13:52:47.061: INFO: Found 0 stateful pods, waiting for 1 May 14 13:52:57.067: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 14 13:52:57.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 13:53:00.131: INFO: stderr: "I0514 13:52:59.977257 1586 log.go:172] (0xc000a38210) (0xc000642780) Create stream\nI0514 13:52:59.977294 1586 log.go:172] (0xc000a38210) (0xc000642780) Stream added, broadcasting: 1\nI0514 13:52:59.979972 1586 log.go:172] (0xc000a38210) Reply frame received for 1\nI0514 13:52:59.980014 1586 log.go:172] (0xc000a38210) (0xc0002ee000) Create stream\nI0514 13:52:59.980029 1586 log.go:172] (0xc000a38210) (0xc0002ee000) Stream added, broadcasting: 3\nI0514 13:52:59.980625 1586 log.go:172] (0xc000a38210) Reply frame received for 3\nI0514 13:52:59.980647 1586 log.go:172] (0xc000a38210) (0xc00033a000) Create stream\nI0514 13:52:59.980654 1586 log.go:172] (0xc000a38210) (0xc00033a000) Stream added, broadcasting: 5\nI0514 13:52:59.981570 1586 log.go:172] (0xc000a38210) Reply frame received for 5\nI0514 13:53:00.060006 1586 log.go:172] (0xc000a38210) Data frame received for 5\nI0514 13:53:00.060031 1586 log.go:172] (0xc00033a000) (5) Data frame handling\nI0514 13:53:00.060051 1586 log.go:172] (0xc00033a000) 
(5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 13:53:00.124124 1586 log.go:172] (0xc000a38210) Data frame received for 3\nI0514 13:53:00.124143 1586 log.go:172] (0xc0002ee000) (3) Data frame handling\nI0514 13:53:00.124149 1586 log.go:172] (0xc0002ee000) (3) Data frame sent\nI0514 13:53:00.124155 1586 log.go:172] (0xc000a38210) Data frame received for 3\nI0514 13:53:00.124158 1586 log.go:172] (0xc0002ee000) (3) Data frame handling\nI0514 13:53:00.124165 1586 log.go:172] (0xc000a38210) Data frame received for 5\nI0514 13:53:00.124169 1586 log.go:172] (0xc00033a000) (5) Data frame handling\nI0514 13:53:00.125836 1586 log.go:172] (0xc000a38210) Data frame received for 1\nI0514 13:53:00.125866 1586 log.go:172] (0xc000642780) (1) Data frame handling\nI0514 13:53:00.125886 1586 log.go:172] (0xc000642780) (1) Data frame sent\nI0514 13:53:00.125903 1586 log.go:172] (0xc000a38210) (0xc000642780) Stream removed, broadcasting: 1\nI0514 13:53:00.125921 1586 log.go:172] (0xc000a38210) Go away received\nI0514 13:53:00.126451 1586 log.go:172] (0xc000a38210) (0xc000642780) Stream removed, broadcasting: 1\nI0514 13:53:00.126474 1586 log.go:172] (0xc000a38210) (0xc0002ee000) Stream removed, broadcasting: 3\nI0514 13:53:00.126485 1586 log.go:172] (0xc000a38210) (0xc00033a000) Stream removed, broadcasting: 5\n" May 14 13:53:00.131: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 13:53:00.131: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 13:53:00.135: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 14 13:53:10.139: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 13:53:10.139: INFO: Waiting for statefulset status.replicas updated to 0 May 14 13:53:10.159: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999682s May 14 13:53:11.163: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989158045s May 14 13:53:12.166: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985197507s May 14 13:53:13.171: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981639664s May 14 13:53:14.175: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977150867s May 14 13:53:15.180: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972888228s May 14 13:53:16.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968047567s May 14 13:53:17.189: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963294525s May 14 13:53:18.193: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.958963252s May 14 13:53:19.199: INFO: Verifying statefulset ss doesn't scale past 1 for another 954.846416ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5432 May 14 13:53:20.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 13:53:20.440: INFO: stderr: "I0514 13:53:20.335477 1609 log.go:172] (0xc0001166e0) (0xc0003006e0) Create stream\nI0514 13:53:20.335554 1609 log.go:172] (0xc0001166e0) (0xc0003006e0) Stream added, broadcasting: 1\nI0514 13:53:20.339050 1609 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0514 13:53:20.339881 1609 log.go:172] (0xc0001166e0) 
(0xc0006b0000) Create stream\nI0514 13:53:20.339900 1609 log.go:172] (0xc0001166e0) (0xc0006b0000) Stream added, broadcasting: 3\nI0514 13:53:20.341070 1609 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0514 13:53:20.341272 1609 log.go:172] (0xc0001166e0) (0xc0006b00a0) Create stream\nI0514 13:53:20.341296 1609 log.go:172] (0xc0001166e0) (0xc0006b00a0) Stream added, broadcasting: 5\nI0514 13:53:20.342445 1609 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0514 13:53:20.433763 1609 log.go:172] (0xc0001166e0) Data frame received for 5\nI0514 13:53:20.433807 1609 log.go:172] (0xc0006b00a0) (5) Data frame handling\nI0514 13:53:20.433826 1609 log.go:172] (0xc0006b00a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0514 13:53:20.433842 1609 log.go:172] (0xc0001166e0) Data frame received for 3\nI0514 13:53:20.433865 1609 log.go:172] (0xc0006b0000) (3) Data frame handling\nI0514 13:53:20.433885 1609 log.go:172] (0xc0006b0000) (3) Data frame sent\nI0514 13:53:20.433895 1609 log.go:172] (0xc0001166e0) Data frame received for 3\nI0514 13:53:20.433902 1609 log.go:172] (0xc0006b0000) (3) Data frame handling\nI0514 13:53:20.433945 1609 log.go:172] (0xc0001166e0) Data frame received for 5\nI0514 13:53:20.433970 1609 log.go:172] (0xc0006b00a0) (5) Data frame handling\nI0514 13:53:20.435793 1609 log.go:172] (0xc0001166e0) Data frame received for 1\nI0514 13:53:20.435822 1609 log.go:172] (0xc0003006e0) (1) Data frame handling\nI0514 13:53:20.435867 1609 log.go:172] (0xc0003006e0) (1) Data frame sent\nI0514 13:53:20.435891 1609 log.go:172] (0xc0001166e0) (0xc0003006e0) Stream removed, broadcasting: 1\nI0514 13:53:20.435916 1609 log.go:172] (0xc0001166e0) Go away received\nI0514 13:53:20.436272 1609 log.go:172] (0xc0001166e0) (0xc0003006e0) Stream removed, broadcasting: 1\nI0514 13:53:20.436288 1609 log.go:172] (0xc0001166e0) (0xc0006b0000) Stream removed, broadcasting: 3\nI0514 13:53:20.436295 1609 log.go:172] (0xc0001166e0) (0xc0006b00a0) Stream removed, broadcasting: 5\n" May 14 13:53:20.440: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 13:53:20.440: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 13:53:20.454: INFO: Found 1 stateful pods, waiting for 3 May 14 13:53:30.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 13:53:30.459: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 13:53:30.460: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 14 13:53:30.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 13:53:30.688: INFO: stderr: "I0514 13:53:30.596767 1629 log.go:172] (0xc0006e8c60) (0xc000708b40) Create stream\nI0514 13:53:30.596849 1629 log.go:172] (0xc0006e8c60) (0xc000708b40) Stream added, broadcasting: 1\nI0514 13:53:30.601094 1629 log.go:172] (0xc0006e8c60) Reply frame received for 1\nI0514 13:53:30.601282 1629 log.go:172] (0xc0006e8c60) (0xc000708280) Create stream\nI0514 13:53:30.601298 1629 log.go:172] (0xc0006e8c60) (0xc000708280) Stream added, broadcasting: 3\nI0514 13:53:30.602178 1629 log.go:172] (0xc0006e8c60) Reply frame received 
for 3\nI0514 13:53:30.602222 1629 log.go:172] (0xc0006e8c60) (0xc000188000) Create stream\nI0514 13:53:30.602237 1629 log.go:172] (0xc0006e8c60) (0xc000188000) Stream added, broadcasting: 5\nI0514 13:53:30.603326 1629 log.go:172] (0xc0006e8c60) Reply frame received for 5\nI0514 13:53:30.681042 1629 log.go:172] (0xc0006e8c60) Data frame received for 3\nI0514 13:53:30.681072 1629 log.go:172] (0xc000708280) (3) Data frame handling\nI0514 13:53:30.681100 1629 log.go:172] (0xc0006e8c60) Data frame received for 5\nI0514 13:53:30.681330 1629 log.go:172] (0xc000188000) (5) Data frame handling\nI0514 13:53:30.681348 1629 log.go:172] (0xc000188000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 13:53:30.681373 1629 log.go:172] (0xc000708280) (3) Data frame sent\nI0514 13:53:30.681387 1629 log.go:172] (0xc0006e8c60) Data frame received for 3\nI0514 13:53:30.681396 1629 log.go:172] (0xc000708280) (3) Data frame handling\nI0514 13:53:30.681538 1629 log.go:172] (0xc0006e8c60) Data frame received for 5\nI0514 13:53:30.681577 1629 log.go:172] (0xc000188000) (5) Data frame handling\nI0514 13:53:30.682963 1629 log.go:172] (0xc0006e8c60) Data frame received for 1\nI0514 13:53:30.682987 1629 log.go:172] (0xc000708b40) (1) Data frame handling\nI0514 13:53:30.683000 1629 log.go:172] (0xc000708b40) (1) Data frame sent\nI0514 13:53:30.683017 1629 log.go:172] (0xc0006e8c60) (0xc000708b40) Stream removed, broadcasting: 1\nI0514 13:53:30.683033 1629 log.go:172] (0xc0006e8c60) Go away received\nI0514 13:53:30.683564 1629 log.go:172] (0xc0006e8c60) (0xc000708b40) Stream removed, broadcasting: 1\nI0514 13:53:30.683585 1629 log.go:172] (0xc0006e8c60) (0xc000708280) Stream removed, broadcasting: 3\nI0514 13:53:30.683596 1629 log.go:172] (0xc0006e8c60) (0xc000188000) Stream removed, broadcasting: 5\n" May 14 13:53:30.688: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 13:53:30.688: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 13:53:30.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 13:53:30.929: INFO: stderr: "I0514 13:53:30.818825 1651 log.go:172] (0xc0001288f0) (0xc0006e4aa0) Create stream\nI0514 13:53:30.818882 1651 log.go:172] (0xc0001288f0) (0xc0006e4aa0) Stream added, broadcasting: 1\nI0514 13:53:30.821427 1651 log.go:172] (0xc0001288f0) Reply frame received for 1\nI0514 13:53:30.821667 1651 log.go:172] (0xc0001288f0) (0xc00092a000) Create stream\nI0514 13:53:30.821698 1651 log.go:172] (0xc0001288f0) (0xc00092a000) Stream added, broadcasting: 3\nI0514 13:53:30.822966 1651 log.go:172] (0xc0001288f0) Reply frame received for 3\nI0514 13:53:30.823000 1651 log.go:172] (0xc0001288f0) (0xc0006e4b40) Create stream\nI0514 13:53:30.823011 1651 log.go:172] (0xc0001288f0) (0xc0006e4b40) Stream added, broadcasting: 5\nI0514 13:53:30.824483 1651 log.go:172] (0xc0001288f0) Reply frame received for 5\nI0514 13:53:30.894319 1651 log.go:172] (0xc0001288f0) Data frame received for 5\nI0514 13:53:30.894351 1651 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0514 13:53:30.894370 1651 log.go:172] (0xc0006e4b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 13:53:30.921940 1651 log.go:172] (0xc0001288f0) Data frame received for 3\nI0514 13:53:30.921984 1651 log.go:172] (0xc00092a000) (3) Data frame 
handling\nI0514 13:53:30.922018 1651 log.go:172] (0xc00092a000) (3) Data frame sent\nI0514 13:53:30.922055 1651 log.go:172] (0xc0001288f0) Data frame received for 5\nI0514 13:53:30.922104 1651 log.go:172] (0xc0006e4b40) (5) Data frame handling\nI0514 13:53:30.922128 1651 log.go:172] (0xc0001288f0) Data frame received for 3\nI0514 13:53:30.922142 1651 log.go:172] (0xc00092a000) (3) Data frame handling\nI0514 13:53:30.924043 1651 log.go:172] (0xc0001288f0) Data frame received for 1\nI0514 13:53:30.924076 1651 log.go:172] (0xc0006e4aa0) (1) Data frame handling\nI0514 13:53:30.924093 1651 log.go:172] (0xc0006e4aa0) (1) Data frame sent\nI0514 13:53:30.924110 1651 log.go:172] (0xc0001288f0) (0xc0006e4aa0) Stream removed, broadcasting: 1\nI0514 13:53:30.924144 1651 log.go:172] (0xc0001288f0) Go away received\nI0514 13:53:30.924714 1651 log.go:172] (0xc0001288f0) (0xc0006e4aa0) Stream removed, broadcasting: 1\nI0514 13:53:30.924741 1651 log.go:172] (0xc0001288f0) (0xc00092a000) Stream removed, broadcasting: 3\nI0514 13:53:30.924754 1651 log.go:172] (0xc0001288f0) (0xc0006e4b40) Stream removed, broadcasting: 5\n" May 14 13:53:30.929: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 13:53:30.929: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 13:53:30.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 13:53:31.190: INFO: stderr: "I0514 13:53:31.061492 1672 log.go:172] (0xc000a264d0) (0xc00033c6e0) Create stream\nI0514 13:53:31.061544 1672 log.go:172] (0xc000a264d0) (0xc00033c6e0) Stream added, broadcasting: 1\nI0514 13:53:31.064251 1672 log.go:172] (0xc000a264d0) Reply frame received for 1\nI0514 13:53:31.064707 1672 log.go:172] (0xc000a264d0) (0xc00033c780) Create stream\nI0514 13:53:31.065632 1672 log.go:172] (0xc000a264d0) (0xc00033c780) Stream added, broadcasting: 3\nI0514 13:53:31.067279 1672 log.go:172] (0xc000a264d0) Reply frame received for 3\nI0514 13:53:31.067309 1672 log.go:172] (0xc000a264d0) (0xc00033c000) Create stream\nI0514 13:53:31.067317 1672 log.go:172] (0xc000a264d0) (0xc00033c000) Stream added, broadcasting: 5\nI0514 13:53:31.068099 1672 log.go:172] (0xc000a264d0) Reply frame received for 5\nI0514 13:53:31.149867 1672 log.go:172] (0xc000a264d0) Data frame received for 5\nI0514 13:53:31.149901 1672 log.go:172] (0xc00033c000) (5) Data frame handling\nI0514 13:53:31.149924 1672 log.go:172] (0xc00033c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 13:53:31.183225 1672 log.go:172] (0xc000a264d0) Data frame received for 5\nI0514 13:53:31.183270 1672 log.go:172] (0xc000a264d0) Data frame received for 3\nI0514 13:53:31.183328 1672 log.go:172] (0xc00033c780) (3) Data frame handling\nI0514 13:53:31.183353 1672 log.go:172] (0xc00033c780) (3) Data frame sent\nI0514 13:53:31.183367 1672 log.go:172] (0xc000a264d0) Data frame received for 3\nI0514 13:53:31.183379 1672 log.go:172] (0xc00033c780) (3) Data frame handling\nI0514 13:53:31.183402 1672 log.go:172] (0xc00033c000) (5) Data frame handling\nI0514 13:53:31.185107 1672 log.go:172] (0xc000a264d0) Data frame received for 1\nI0514 13:53:31.185323 1672 log.go:172] (0xc00033c6e0) (1) Data frame handling\nI0514 13:53:31.185340 1672 log.go:172] (0xc00033c6e0) (1) Data frame sent\nI0514 13:53:31.185460 1672 log.go:172] (0xc000a264d0) (0xc00033c6e0) 
Stream removed, broadcasting: 1\nI0514 13:53:31.185765 1672 log.go:172] (0xc000a264d0) (0xc00033c6e0) Stream removed, broadcasting: 1\nI0514 13:53:31.185792 1672 log.go:172] (0xc000a264d0) (0xc00033c780) Stream removed, broadcasting: 3\nI0514 13:53:31.185806 1672 log.go:172] (0xc000a264d0) (0xc00033c000) Stream removed, broadcasting: 5\n" May 14 13:53:31.190: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 13:53:31.190: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 13:53:31.191: INFO: Waiting for statefulset status.replicas updated to 0 May 14 13:53:31.193: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 14 13:53:41.201: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 13:53:41.201: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 13:53:41.201: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 13:53:41.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999358s May 14 13:53:42.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9951336s May 14 13:53:43.223: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989898299s May 14 13:53:44.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985976114s May 14 13:53:45.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97594117s May 14 13:53:46.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97165985s May 14 13:53:47.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967320228s May 14 13:53:48.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964616559s May 14 13:53:49.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.940462874s May 14 13:53:50.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.664673ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5432 May 14 13:53:51.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 13:53:51.526: INFO: stderr: "I0514 13:53:51.415897 1692 log.go:172] (0xc000862370) (0xc000428820) Create stream\nI0514 13:53:51.415965 1692 log.go:172] (0xc000862370) (0xc000428820) Stream added, broadcasting: 1\nI0514 13:53:51.418955 1692 log.go:172] (0xc000862370) Reply frame received for 1\nI0514 13:53:51.419018 1692 log.go:172] (0xc000862370) (0xc000700000) Create stream\nI0514 13:53:51.419036 1692 log.go:172] (0xc000862370) (0xc000700000) Stream added, broadcasting: 3\nI0514 13:53:51.419980 1692 log.go:172] (0xc000862370) Reply frame received for 3\nI0514 13:53:51.420032 1692 log.go:172] (0xc000862370) (0xc000712000) Create stream\nI0514 13:53:51.420055 1692 log.go:172] (0xc000862370) (0xc000712000) Stream added, broadcasting: 5\nI0514 13:53:51.421038 1692 log.go:172] (0xc000862370) Reply frame received for 5\nI0514 13:53:51.519517 1692 log.go:172] (0xc000862370) Data frame received for 5\nI0514 13:53:51.519563 1692 log.go:172] (0xc000712000) (5) Data frame handling\nI0514 13:53:51.519591 1692 log.go:172] (0xc000712000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0514 13:53:51.520162 1692
log.go:172] (0xc000862370) Data frame received for 5\nI0514 13:53:51.520195 1692 log.go:172] (0xc000712000) (5) Data frame handling\nI0514 13:53:51.520221 1692 log.go:172] (0xc000862370) Data frame received for 3\nI0514 13:53:51.520237 1692 log.go:172] (0xc000700000) (3) Data frame handling\nI0514 13:53:51.520251 1692 log.go:172] (0xc000700000) (3) Data frame sent\nI0514 13:53:51.520271 1692 log.go:172] (0xc000862370) Data frame received for 3\nI0514 13:53:51.520284 1692 log.go:172] (0xc000700000) (3) Data frame handling\nI0514 13:53:51.521566 1692 log.go:172] (0xc000862370) Data frame received for 1\nI0514 13:53:51.521586 1692 log.go:172] (0xc000428820) (1) Data frame handling\nI0514 13:53:51.521619 1692 log.go:172] (0xc000428820) (1) Data frame sent\nI0514 13:53:51.521685 1692 log.go:172] (0xc000862370) (0xc000428820) Stream removed, broadcasting: 1\nI0514 13:53:51.521857 1692 log.go:172] (0xc000862370) Go away received\nI0514 13:53:51.522016 1692 log.go:172] (0xc000862370) (0xc000428820) Stream removed, broadcasting: 1\nI0514 13:53:51.522034 1692 log.go:172] (0xc000862370) (0xc000700000) Stream removed, broadcasting: 3\nI0514 13:53:51.522040 1692 log.go:172] (0xc000862370) (0xc000712000) Stream removed, broadcasting: 5\n" May 14 13:53:51.526: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 13:53:51.526: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 13:53:51.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 13:53:51.757: INFO: stderr: "I0514 13:53:51.672175 1712 log.go:172] (0xc000930420) (0xc0002d4820) Create stream\nI0514 13:53:51.672269 1712 log.go:172] (0xc000930420) (0xc0002d4820) Stream added, broadcasting: 1\nI0514 13:53:51.678079 1712 log.go:172] (0xc000930420) Reply frame received for 1\nI0514 13:53:51.678419 1712 log.go:172] (0xc000930420) (0xc0006fc000) Create stream\nI0514 13:53:51.678442 1712 log.go:172] (0xc000930420) (0xc0006fc000) Stream added, broadcasting: 3\nI0514 13:53:51.680437 1712 log.go:172] (0xc000930420) Reply frame received for 3\nI0514 13:53:51.680475 1712 log.go:172] (0xc000930420) (0xc0005fc3c0) Create stream\nI0514 13:53:51.680487 1712 log.go:172] (0xc000930420) (0xc0005fc3c0) Stream added, broadcasting: 5\nI0514 13:53:51.681253 1712 log.go:172] (0xc000930420) Reply frame received for 5\nI0514 13:53:51.750408 1712 log.go:172] (0xc000930420) Data frame received for 3\nI0514 13:53:51.750460 1712 log.go:172] (0xc0006fc000) (3) Data frame handling\nI0514 13:53:51.750484 1712 log.go:172] (0xc0006fc000) (3) Data frame sent\nI0514 13:53:51.750495 1712 log.go:172] (0xc000930420) Data frame received for 3\nI0514 13:53:51.750503 1712 log.go:172] (0xc0006fc000) (3) Data frame handling\nI0514 13:53:51.750547 1712 log.go:172] (0xc000930420) Data frame received for 5\nI0514 13:53:51.750563 1712 log.go:172] (0xc0005fc3c0) (5) Data frame handling\nI0514 13:53:51.750582 1712 log.go:172] (0xc0005fc3c0) (5) Data frame sent\nI0514 13:53:51.750592 1712 log.go:172] (0xc000930420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0514 13:53:51.750611 1712 log.go:172] (0xc0005fc3c0) (5) Data frame handling\nI0514 13:53:51.752211 1712 log.go:172] (0xc000930420) Data frame received for 1\nI0514 13:53:51.752260 1712 log.go:172] (0xc0002d4820) (1) Data frame handling\nI0514 13:53:51.752287 1712 
log.go:172] (0xc0002d4820) (1) Data frame sent\nI0514 13:53:51.752311 1712 log.go:172] (0xc000930420) (0xc0002d4820) Stream removed, broadcasting: 1\nI0514 13:53:51.752513 1712 log.go:172] (0xc000930420) Go away received\nI0514 13:53:51.752818 1712 log.go:172] (0xc000930420) (0xc0002d4820) Stream removed, broadcasting: 1\nI0514 13:53:51.752847 1712 log.go:172] (0xc000930420) (0xc0006fc000) Stream removed, broadcasting: 3\nI0514 13:53:51.752866 1712 log.go:172] (0xc000930420) (0xc0005fc3c0) Stream removed, broadcasting: 5\n" May 14 13:53:51.757: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 13:53:51.757: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 13:53:51.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5432 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 13:53:51.978: INFO: stderr: "I0514 13:53:51.890365 1733 log.go:172] (0xc000116c60) (0xc00043e780) Create stream\nI0514 13:53:51.890427 1733 log.go:172] (0xc000116c60) (0xc00043e780) Stream added, broadcasting: 1\nI0514 13:53:51.893310 1733 log.go:172] (0xc000116c60) Reply frame received for 1\nI0514 13:53:51.893344 1733 log.go:172] (0xc000116c60) (0xc0008fc000) Create stream\nI0514 13:53:51.893358 1733 log.go:172] (0xc000116c60) (0xc0008fc000) Stream added, broadcasting: 3\nI0514 13:53:51.894233 1733 log.go:172] (0xc000116c60) Reply frame received for 3\nI0514 13:53:51.894262 1733 log.go:172] (0xc000116c60) (0xc00043e820) Create stream\nI0514 13:53:51.894271 1733 log.go:172] (0xc000116c60) (0xc00043e820) Stream added, broadcasting: 5\nI0514 13:53:51.895119 1733 log.go:172] (0xc000116c60) Reply frame received for 5\nI0514 13:53:51.971624 1733 log.go:172] (0xc000116c60) Data frame received for 3\nI0514 13:53:51.971671 1733 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0514 13:53:51.971684 1733 log.go:172] (0xc0008fc000) (3) Data frame sent\nI0514 13:53:51.971692 1733 log.go:172] (0xc000116c60) Data frame received for 3\nI0514 13:53:51.971700 1733 log.go:172] (0xc0008fc000) (3) Data frame handling\nI0514 13:53:51.971731 1733 log.go:172] (0xc000116c60) Data frame received for 5\nI0514 13:53:51.971742 1733 log.go:172] (0xc00043e820) (5) Data frame handling\nI0514 13:53:51.971757 1733 log.go:172] (0xc00043e820) (5) Data frame sent\nI0514 13:53:51.971767 1733 log.go:172] (0xc000116c60) Data frame received for 5\nI0514 13:53:51.971774 1733 log.go:172] (0xc00043e820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0514 13:53:51.973347 1733 log.go:172] (0xc000116c60) Data frame received for 1\nI0514 13:53:51.973379 1733 log.go:172] (0xc00043e780) (1) Data frame handling\nI0514 13:53:51.973398 1733 log.go:172] (0xc00043e780) (1) Data frame sent\nI0514 13:53:51.973417 1733 log.go:172] (0xc000116c60) (0xc00043e780) Stream removed, broadcasting: 1\nI0514 13:53:51.973449 1733 log.go:172] (0xc000116c60) Go away received\nI0514 13:53:51.973906 1733 log.go:172] (0xc000116c60) (0xc00043e780) Stream removed, broadcasting: 1\nI0514 13:53:51.973948 1733 log.go:172] (0xc000116c60) (0xc0008fc000) Stream removed, broadcasting: 3\nI0514 13:53:51.973968 1733 log.go:172] (0xc000116c60) (0xc00043e820) Stream removed, broadcasting: 5\n" May 14 13:53:51.978: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 13:53:51.978: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true 
on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 13:53:51.978: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 14 13:54:31.994: INFO: Deleting all statefulset in ns statefulset-5432 May 14 13:54:31.998: INFO: Scaling statefulset ss to 0 May 14 13:54:32.007: INFO: Waiting for statefulset status.replicas updated to 0 May 14 13:54:32.009: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:54:32.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5432" for this suite. May 14 13:54:40.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:54:40.113: INFO: namespace statefulset-5432 deletion completed in 8.092786425s • [SLOW TEST:113.137 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:54:40.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 14 13:54:41.034: INFO: Pod name wrapped-volume-race-f2437d48-abee-4ad8-ae3b-5184e551022b: Found 0 pods out of 5 May 14 13:54:46.042: INFO: Pod name wrapped-volume-race-f2437d48-abee-4ad8-ae3b-5184e551022b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f2437d48-abee-4ad8-ae3b-5184e551022b in namespace emptydir-wrapper-1454, will wait for the garbage collector to delete the pods May 14 13:55:00.141: INFO: Deleting ReplicationController wrapped-volume-race-f2437d48-abee-4ad8-ae3b-5184e551022b took: 28.727662ms May 14 13:55:00.441: INFO: Terminating ReplicationController wrapped-volume-race-f2437d48-abee-4ad8-ae3b-5184e551022b pods took: 300.211078ms STEP: Creating RC which spawns configmap-volume pods May 14 13:55:42.384: INFO: Pod name wrapped-volume-race-48ccffff-0f07-430a-b43a-6154a27316c4: Found 0 pods out of 5 May 14 13:55:47.391: INFO: Pod name wrapped-volume-race-48ccffff-0f07-430a-b43a-6154a27316c4: Found 5 pods out of 5 
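The workload that provokes this race is an RC whose pods each mount the configmaps created above as volumes, so the kubelet has to set up many wrapped volumes concurrently. A minimal Go sketch of that shape, using the k8s.io/api types (the names, image and command here are illustrative, not the harness's own builder):

package sketch

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// raceRC builds a ReplicationController in the shape exercised above: each
// of its five replicas mounts `count` configmap volumes, forcing the kubelet
// to mount many wrapped volumes at once.
func raceRC(ns string, count int) *corev1.ReplicationController {
    replicas := int32(5)
    labels := map[string]string{"name": "wrapped-volume-race"}
    var volumes []corev1.Volume
    var mounts []corev1.VolumeMount
    for i := 0; i < count; i++ {
        name := fmt.Sprintf("racey-configmap-%d", i) // illustrative name
        volumes = append(volumes, corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                },
            },
        })
        mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
    }
    return &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race", Namespace: ns},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:         "test-container",
                        Image:        "docker.io/library/busybox:1.29", // illustrative
                        Command:      []string{"sleep", "10000"},
                        VolumeMounts: mounts,
                    }},
                    Volumes: volumes,
                },
            },
        },
    }
}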
STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-48ccffff-0f07-430a-b43a-6154a27316c4 in namespace emptydir-wrapper-1454, will wait for the garbage collector to delete the pods May 14 13:56:01.471: INFO: Deleting ReplicationController wrapped-volume-race-48ccffff-0f07-430a-b43a-6154a27316c4 took: 12.209582ms May 14 13:56:01.872: INFO: Terminating ReplicationController wrapped-volume-race-48ccffff-0f07-430a-b43a-6154a27316c4 pods took: 400.189416ms STEP: Creating RC which spawns configmap-volume pods May 14 13:56:43.360: INFO: Pod name wrapped-volume-race-f8dcfc7d-92b1-4464-8990-f75271ea8a6f: Found 0 pods out of 5 May 14 13:56:48.369: INFO: Pod name wrapped-volume-race-f8dcfc7d-92b1-4464-8990-f75271ea8a6f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f8dcfc7d-92b1-4464-8990-f75271ea8a6f in namespace emptydir-wrapper-1454, will wait for the garbage collector to delete the pods May 14 13:57:02.448: INFO: Deleting ReplicationController wrapped-volume-race-f8dcfc7d-92b1-4464-8990-f75271ea8a6f took: 8.041924ms May 14 13:57:02.848: INFO: Terminating ReplicationController wrapped-volume-race-f8dcfc7d-92b1-4464-8990-f75271ea8a6f pods took: 400.289995ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:57:43.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1454" for this suite. May 14 13:57:53.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:57:54.032: INFO: namespace emptydir-wrapper-1454 deletion completed in 10.153401801s • [SLOW TEST:193.919 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:57:54.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 14 13:58:01.658: INFO: 9 pods remaining May 14 13:58:01.658: INFO: 0 pods have nil DeletionTimestamp May 14 13:58:01.658: INFO: May 14 13:58:02.326: INFO: 0 pods remaining May 14 13:58:02.326: INFO: 0 pods have nil DeletionTimestamp May 14 13:58:02.326: INFO: May 14 13:58:03.343: INFO: 0 pods remaining May 14 13:58:03.343: INFO: 0 pods have nil DeletionTimestamp May 14 13:58:03.343: INFO: STEP: Gathering metrics
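The deleteOptions behaviour verified here is foreground cascading deletion: with PropagationPolicy set to Foreground, the rc survives (carrying a deletionTimestamp) until the garbage collector has removed every one of its pods, which is why the harness can still count pods remaining after the delete call returns. A minimal client-go sketch of such a delete, assuming a recent client-go (the names are illustrative):

package sketch

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation: the rc object is kept until the garbage collector has
// deleted all of its dependent pods, matching the behaviour above.
func deleteRCForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    policy := metav1.DeletePropagationForeground
    return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name,
        metav1.DeleteOptions{PropagationPolicy: &policy})
}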
W0514 13:58:05.272622 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 13:58:05.272: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:58:05.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4307" for this suite. May 14 13:58:13.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:58:13.424: INFO: namespace gc-4307 deletion completed in 8.147935194s • [SLOW TEST:19.390 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:58:13.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 13:58:13.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6" in namespace "projected-6374" to be "success or failure" May 14 13:58:13.531: INFO: Pod "downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.497463ms May 14 13:58:15.535: INFO: Pod "downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014937623s May 14 13:58:17.550: INFO: Pod "downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6": Phase="Running", Reason="", readiness=true. Elapsed: 4.030199405s May 14 13:58:19.553: INFO: Pod "downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033693371s STEP: Saw pod success May 14 13:58:19.553: INFO: Pod "downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6" satisfied condition "success or failure" May 14 13:58:19.556: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6 container client-container: STEP: delete the pod May 14 13:58:19.575: INFO: Waiting for pod downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6 to disappear May 14 13:58:19.579: INFO: Pod downwardapi-volume-f4fde2d0-c799-4f3e-a9be-2f4c7ee52cd6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:58:19.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6374" for this suite. May 14 13:58:25.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:58:25.778: INFO: namespace projected-6374 deletion completed in 6.195427816s • [SLOW TEST:12.354 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:58:25.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:58:29.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7615" for this suite. 
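The Kubelet spec above leaves few traces in the log: it runs a busybox pod whose command prints to stdout, then asserts the text can be read back through the container log endpoint. A sketch of the API-level read, assuming a recent client-go (the echo command in the comment is illustrative, since the log does not record it):

package sketch

import (
    "context"
    "io"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
)

// podStdout streams a pod's container log, which is what the harness
// checks after running a busybox pod with a command along the lines of:
//   ["/bin/sh", "-c", "echo 'Hello from busybox'"]   (illustrative)
func podStdout(ctx context.Context, cs kubernetes.Interface, ns, pod string) (string, error) {
    rc, err := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{}).Stream(ctx)
    if err != nil {
        return "", err
    }
    defer rc.Close()
    b, err := io.ReadAll(rc)
    return string(b), err
}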
May 14 13:59:13.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:59:14.010: INFO: namespace kubelet-test-7615 deletion completed in 44.149700482s • [SLOW TEST:48.232 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:59:14.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 14 13:59:14.089: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:14.094: INFO: Number of nodes with available pods: 0 May 14 13:59:14.094: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:15.099: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:15.103: INFO: Number of nodes with available pods: 0 May 14 13:59:15.103: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:16.385: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:16.388: INFO: Number of nodes with available pods: 0 May 14 13:59:16.388: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:17.100: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:17.104: INFO: Number of nodes with available pods: 0 May 14 13:59:17.104: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:18.134: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:18.167: INFO: Number of nodes with available pods: 0 May 14 13:59:18.167: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:19.122: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node May 14 13:59:19.125: INFO: Number of nodes with available pods: 2 May 14 13:59:19.125: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 14 13:59:19.155: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:19.158: INFO: Number of nodes with available pods: 1 May 14 13:59:19.158: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:20.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:20.165: INFO: Number of nodes with available pods: 1 May 14 13:59:20.165: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:21.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:21.164: INFO: Number of nodes with available pods: 1 May 14 13:59:21.164: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:22.186: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:22.190: INFO: Number of nodes with available pods: 1 May 14 13:59:22.190: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:23.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:23.164: INFO: Number of nodes with available pods: 1 May 14 13:59:23.164: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:24.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:24.247: INFO: Number of nodes with available pods: 1 May 14 13:59:24.247: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:25.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:25.166: INFO: Number of nodes with available pods: 1 May 14 13:59:25.166: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:26.164: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:26.168: INFO: Number of nodes with available pods: 1 May 14 13:59:26.168: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:27.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:27.167: INFO: Number of nodes with available pods: 1 May 14 13:59:27.167: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:28.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:28.167: INFO: Number of nodes with 
available pods: 1 May 14 13:59:28.167: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:29.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:29.166: INFO: Number of nodes with available pods: 1 May 14 13:59:29.166: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:30.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:30.169: INFO: Number of nodes with available pods: 1 May 14 13:59:30.169: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:31.164: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:31.167: INFO: Number of nodes with available pods: 1 May 14 13:59:31.167: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:32.183: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:32.195: INFO: Number of nodes with available pods: 1 May 14 13:59:32.195: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:33.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:33.165: INFO: Number of nodes with available pods: 1 May 14 13:59:33.165: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:34.170: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:34.173: INFO: Number of nodes with available pods: 1 May 14 13:59:34.173: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:35.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:35.166: INFO: Number of nodes with available pods: 1 May 14 13:59:35.166: INFO: Node iruya-worker is running more than one daemon pod May 14 13:59:36.162: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:59:36.166: INFO: Number of nodes with available pods: 2 May 14 13:59:36.166: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1736, will wait for the garbage collector to delete the pods May 14 13:59:36.231: INFO: Deleting DaemonSet.extensions daemon-set took: 9.847267ms May 14 13:59:36.531: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.271581ms May 14 13:59:42.235: INFO: Number of nodes with available pods: 0 May 14 13:59:42.235: INFO: Number of running nodes: 0, number of available pods: 0 May 14 13:59:42.238: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1736/daemonsets","resourceVersion":"10865557"},"items":null} May 14 13:59:42.241: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1736/pods","resourceVersion":"10865557"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 13:59:42.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1736" for this suite. May 14 13:59:50.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:59:50.349: INFO: namespace daemonsets-1736 deletion completed in 8.093600959s • [SLOW TEST:36.339 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 13:59:50.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 13:59:50.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-127' May 14 13:59:50.505: INFO: stderr: "" May 14 13:59:50.505: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 14 13:59:55.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-127 -o json' May 14 13:59:55.655: INFO: stderr: "" May 14 13:59:55.655: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-14T13:59:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-127\",\n \"resourceVersion\": \"10865614\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-127/pods/e2e-test-nginx-pod\",\n \"uid\": \"4c929cb6-f5f4-4245-b656-1b71afebafc6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n 
\"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-l6xzb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-l6xzb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-l6xzb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T13:59:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T13:59:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T13:59:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T13:59:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://dbe28287be8b3969e85b84837568bd4553b23ff939b72ddcdae31bcad869c083\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-14T13:59:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.104\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-14T13:59:50Z\"\n }\n}\n" STEP: replace the image in the pod May 14 13:59:55.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-127' May 14 13:59:55.893: INFO: stderr: "" May 14 13:59:55.893: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 14 13:59:55.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-127' May 14 14:00:01.931: INFO: stderr: "" May 14 14:00:01.931: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:00:01.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-127" for this suite. 
May 14 14:00:07.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:00:08.060: INFO: namespace kubectl-127 deletion completed in 6.099784129s • [SLOW TEST:17.711 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:00:08.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0514 14:00:20.914721 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 14:00:20.914: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:00:20.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3988" for this suite. 
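What keeps half of the pods alive in this spec is multiple ownership: a dependent is garbage-collected only once every entry in its ownerReferences is gone, so pods that also list simpletest-rc-to-stay as an owner survive the deletion of simpletest-rc-to-be-deleted. A sketch of attaching the two owners (the helper and the BlockOwnerDeletion choice are illustrative):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addOwners gives a pod two ownerReferences, as done for half of the pods
// above: one to the rc being deleted and one to the rc that stays. The
// garbage collector only collects a dependent once all owners are gone.
func addOwners(pod *corev1.Pod, doomed, survivor *corev1.ReplicationController) {
    block := true
    pod.OwnerReferences = []metav1.OwnerReference{
        {APIVersion: "v1", Kind: "ReplicationController",
            Name: doomed.Name, UID: doomed.UID, BlockOwnerDeletion: &block},
        {APIVersion: "v1", Kind: "ReplicationController",
            Name: survivor.Name, UID: survivor.UID},
    }
}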
May 14 14:00:29.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:00:29.107: INFO: namespace gc-3988 deletion completed in 8.188914586s • [SLOW TEST:21.046 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:00:29.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-131dadeb-ce85-40a5-90fd-caafd5ca428d STEP: Creating a pod to test consume secrets May 14 14:00:29.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb" in namespace "projected-6341" to be "success or failure" May 14 14:00:29.495: INFO: Pod "pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.81704ms May 14 14:00:31.540: INFO: Pod "pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057356767s May 14 14:00:33.544: INFO: Pod "pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb": Phase="Running", Reason="", readiness=true. Elapsed: 4.060537174s May 14 14:00:35.548: INFO: Pod "pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064523093s STEP: Saw pod success May 14 14:00:35.548: INFO: Pod "pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb" satisfied condition "success or failure" May 14 14:00:35.550: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb container projected-secret-volume-test: STEP: delete the pod May 14 14:00:35.674: INFO: Waiting for pod pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb to disappear May 14 14:00:35.688: INFO: Pod pod-projected-secrets-58b80ad2-ebca-46c7-9aa1-95e2b46feedb no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:00:35.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6341" for this suite. 
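In shape, the pod just exercised mounts the secret through a projected volume with an explicit defaultMode while running as a non-root user with an fsGroup, and the test then checks the mounted file's mode and ownership from inside the container. A Go sketch of such a pod (the mode, UID, GID, mount path and command are illustrative; the log does not record the test's exact values):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts a secret via a projected volume with an
// explicit defaultMode, running as non-root with an fsGroup so the
// mounted files get the requested mode and group ownership.
func projectedSecretPod(ns, secretName string) *corev1.Pod {
    mode := int32(0440)                       // illustrative file mode
    uid, fsGroup := int64(1000), int64(1001)  // illustrative IDs
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets", Namespace: ns},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
            RestartPolicy:   corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "projected-secret", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-secret",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                            },
                        }},
                    },
                },
            }},
        },
    }
}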
May 14 14:00:41.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:00:41.793: INFO: namespace projected-6341 deletion completed in 6.101640024s • [SLOW TEST:12.686 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:00:41.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1351.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1351.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 14:00:47.914: INFO: DNS probes using dns-1351/dns-test-4c2298f0-b6ca-4170-aff1-51c3371341eb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:00:48.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1351" for this suite. 
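The dig loops above are what the probe containers execute; for a one-off manual check of the same cluster record, a throwaway pod works (busybox:1.28 is a common choice because its nslookup behaves sanely):

kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup kubernetes.default.svc.cluster.local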
May 14 14:00:54.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:00:54.202: INFO: namespace dns-1351 deletion completed in 6.113168103s • [SLOW TEST:12.409 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:00:54.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-30243b06-d66c-44ab-834d-e95d90d5f660 in namespace container-probe-6463 May 14 14:00:58.318: INFO: Started pod busybox-30243b06-d66c-44ab-834d-e95d90d5f660 in namespace container-probe-6463 STEP: checking the pod's current state and verifying that restartCount is present May 14 14:00:58.321: INFO: Initial restart count of pod busybox-30243b06-d66c-44ab-834d-e95d90d5f660 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:04:59.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6463" for this suite. 
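A hedged sketch of the probe shape this test exercises (names illustrative): the container creates /tmp/health and never deletes it, so the exec probe keeps succeeding and restartCount stays 0 across the roughly four-minute observation window logged above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo             # illustrative
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF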
May 14 14:05:05.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:05:05.318: INFO: namespace container-probe-6463 deletion completed in 6.11139175s • [SLOW TEST:251.116 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:05:05.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 14 14:05:09.952: INFO: Successfully updated pod "labelsupdate591c5b5e-3e3a-43d6-be98-4484be1f6143" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:05:13.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4095" for this suite. 
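The labels-update test depends on the kubelet refreshing downward-API files when pod metadata changes. A sketch under illustrative names: mount metadata.labels as a file, relabel the pod, and the file content follows shortly after, as the successful update in the log above shows:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                    # illustrative
  labels:
    stage: one
spec:
  containers:
  - name: watch
    image: busybox:1.28
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo stage=two --overwrite   # file updates shortly after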
May 14 14:05:35.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:05:36.099: INFO: namespace projected-4095 deletion completed in 22.117146373s • [SLOW TEST:30.780 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:05:36.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f0513404-da8d-4ad5-9baa-2038ca68a024 STEP: Creating a pod to test consume secrets May 14 14:05:36.196: INFO: Waiting up to 5m0s for pod "pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b" in namespace "secrets-3910" to be "success or failure" May 14 14:05:36.206: INFO: Pod "pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274304ms May 14 14:05:38.209: INFO: Pod "pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013502466s May 14 14:05:40.214: INFO: Pod "pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017780956s May 14 14:05:42.217: INFO: Pod "pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021392767s STEP: Saw pod success May 14 14:05:42.217: INFO: Pod "pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b" satisfied condition "success or failure" May 14 14:05:42.220: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b container secret-volume-test: STEP: delete the pod May 14 14:05:42.250: INFO: Waiting for pod pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b to disappear May 14 14:05:42.278: INFO: Pod pod-secrets-a50be5e5-dd00-4748-bbf1-dea241b7d41b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:05:42.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3910" for this suite. 
May 14 14:05:48.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:05:48.468: INFO: namespace secrets-3910 deletion completed in 6.186264606s • [SLOW TEST:12.369 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:05:48.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:05:48.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9541' May 14 14:05:51.664: INFO: stderr: "" May 14 14:05:51.664: INFO: stdout: "replicationcontroller/redis-master created\n" May 14 14:05:51.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9541' May 14 14:05:52.071: INFO: stderr: "" May 14 14:05:52.071: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 14 14:05:53.076: INFO: Selector matched 1 pods for map[app:redis] May 14 14:05:53.076: INFO: Found 0 / 1 May 14 14:05:54.076: INFO: Selector matched 1 pods for map[app:redis] May 14 14:05:54.076: INFO: Found 0 / 1 May 14 14:05:55.076: INFO: Selector matched 1 pods for map[app:redis] May 14 14:05:55.076: INFO: Found 0 / 1 May 14 14:05:56.076: INFO: Selector matched 1 pods for map[app:redis] May 14 14:05:56.076: INFO: Found 1 / 1 May 14 14:05:56.076: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 14:05:56.079: INFO: Selector matched 1 pods for map[app:redis] May 14 14:05:56.079: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 14 14:05:56.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-z4gxg --namespace=kubectl-9541' May 14 14:05:56.188: INFO: stderr: "" May 14 14:05:56.188: INFO: stdout: "Name: redis-master-z4gxg\nNamespace: kubectl-9541\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Thu, 14 May 2020 14:05:51 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.113\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://39f12cfbe91782c46f971befcceca5c0d3816d49e19339d54806d7f2c3fb3971\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 14 May 2020 14:05:55 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-rcrcd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-rcrcd:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-rcrcd\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-9541/redis-master-z4gxg to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" May 14 14:05:56.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9541' May 14 14:05:56.303: INFO: stderr: "" May 14 14:05:56.303: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9541\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-z4gxg\n" May 14 14:05:56.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9541' May 14 14:05:56.426: INFO: stderr: "" May 14 14:05:56.426: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9541\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.39.139\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.113:6379\nSession Affinity: None\nEvents: \n" May 14 14:05:56.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 14 14:05:56.565: INFO: stderr: "" May 14 14:05:56.565: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 14 May 2020 14:05:11 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 14 May 2020 14:05:11 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 14 May 2020 14:05:11 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 14 May 2020 14:05:11 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 59d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 14 14:05:56.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9541' May 14 14:05:56.682: INFO: stderr: "" May 14 14:05:56.682: INFO: stdout: "Name: kubectl-9541\nLabels: e2e-framework=kubectl\n e2e-run=36f69dbf-5939-4656-8dd2-3f241d0129c0\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:05:56.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9541" for this suite. 
May 14 14:06:18.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:06:18.809: INFO: namespace kubectl-9541 deletion completed in 22.122555889s • [SLOW TEST:30.341 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:06:18.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7 May 14 14:06:18.883: INFO: Pod name my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7: Found 0 pods out of 1 May 14 14:06:23.887: INFO: Pod name my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7: Found 1 pods out of 1 May 14 14:06:23.887: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7" are running May 14 14:06:23.889: INFO: Pod "my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7-m59cv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 14:06:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 14:06:22 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 14:06:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 14:06:18 +0000 UTC Reason: Message:}]) May 14 14:06:23.890: INFO: Trying to dial the pod May 14 14:06:28.901: INFO: Controller my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7: Got expected result from replica 1 [my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7-m59cv]: "my-hostname-basic-54b4d7b9-76e2-407f-892d-9e80ca5371d7-m59cv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:06:28.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5992" for this suite. 
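A sketch of the kind of ReplicationController the test above creates (name and image are assumptions; the suite appears to use a serve-hostname image, and any HTTP server that replies with its own hostname demonstrates the same per-replica check):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic              # illustrative
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed
        ports:
        - containerPort: 9376
EOF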
May 14 14:06:34.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:06:35.029: INFO: namespace replication-controller-5992 deletion completed in 6.124772977s • [SLOW TEST:16.220 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:06:35.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a98f7280-22d4-4655-b7aa-3d7049ae9a2c STEP: Creating a pod to test consume secrets May 14 14:06:35.126: INFO: Waiting up to 5m0s for pod "pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097" in namespace "secrets-509" to be "success or failure" May 14 14:06:35.129: INFO: Pod "pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448031ms May 14 14:06:37.139: INFO: Pod "pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012760052s May 14 14:06:39.202: INFO: Pod "pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076434514s May 14 14:06:41.206: INFO: Pod "pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080168419s STEP: Saw pod success May 14 14:06:41.206: INFO: Pod "pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097" satisfied condition "success or failure" May 14 14:06:41.209: INFO: Trying to get logs from node iruya-worker pod pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097 container secret-volume-test: STEP: delete the pod May 14 14:06:41.244: INFO: Waiting for pod pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097 to disappear May 14 14:06:41.334: INFO: Pod pod-secrets-59b33ce8-3ce4-4962-a539-59a3bdb8a097 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:06:41.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-509" for this suite. 
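"Multiple volumes" here means the same Secret mounted more than once in a single pod, which must yield identical content at both paths. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-twice-demo              # illustrative
spec:
  containers:
  - name: check
    image: busybox:1.28
    command: ["sh", "-c", "ls /etc/secret-a /etc/secret-b && sleep 3600"]
    volumeMounts:
    - name: vol-a
      mountPath: /etc/secret-a
    - name: vol-b
      mountPath: /etc/secret-b
  volumes:
  - name: vol-a
    secret:
      secretName: demo-secret          # must exist
  - name: vol-b
    secret:
      secretName: demo-secret          # same Secret, second volume
EOF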
May 14 14:06:47.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:06:47.471: INFO: namespace secrets-509 deletion completed in 6.133954983s • [SLOW TEST:12.442 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:06:47.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9740.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9740.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9740.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9740.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9740.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9740.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 14:06:55.611: INFO: DNS probes using dns-9740/dns-test-beec7ce2-c168-49f5-86cc-65b77c2e01cf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:06:55.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9740" for this suite. 
May 14 14:07:01.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:07:01.889: INFO: namespace dns-9740 deletion completed in 6.171585969s • [SLOW TEST:14.418 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:07:01.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8488.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8488.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8488.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8488.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8488.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 101.15.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.15.101_udp@PTR;check="$$(dig +tcp +noall +answer +search 101.15.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.15.101_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8488.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8488.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8488.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8488.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8488.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8488.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 101.15.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.15.101_udp@PTR;check="$$(dig +tcp +noall +answer +search 101.15.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.15.101_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 14:07:10.096: INFO: Unable to read wheezy_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.100: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.104: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.107: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.127: INFO: Unable to read jessie_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.130: INFO: Unable to read jessie_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.134: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.137: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:10.156: INFO: Lookups using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 failed for: [wheezy_udp@dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_udp@dns-test-service.dns-8488.svc.cluster.local jessie_tcp@dns-test-service.dns-8488.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local] May 14 14:07:15.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods 
dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.173: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.193: INFO: Unable to read jessie_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.199: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.202: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:15.227: INFO: Lookups using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 failed for: [wheezy_udp@dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_udp@dns-test-service.dns-8488.svc.cluster.local jessie_tcp@dns-test-service.dns-8488.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local] May 14 14:07:20.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.164: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.167: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.200: INFO: Unable to read jessie_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the 
server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.205: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.207: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:20.223: INFO: Lookups using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 failed for: [wheezy_udp@dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_udp@dns-test-service.dns-8488.svc.cluster.local jessie_tcp@dns-test-service.dns-8488.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local] May 14 14:07:25.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.168: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.191: INFO: Unable to read jessie_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.194: INFO: Unable to read jessie_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.196: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.199: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod 
dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:25.217: INFO: Lookups using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 failed for: [wheezy_udp@dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_udp@dns-test-service.dns-8488.svc.cluster.local jessie_tcp@dns-test-service.dns-8488.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local] May 14 14:07:30.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.164: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.167: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.170: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.187: INFO: Unable to read jessie_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.189: INFO: Unable to read jessie_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.191: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.194: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:30.208: INFO: Lookups using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 failed for: [wheezy_udp@dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_udp@dns-test-service.dns-8488.svc.cluster.local jessie_tcp@dns-test-service.dns-8488.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local] May 14 
14:07:35.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.187: INFO: Unable to read jessie_udp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.190: INFO: Unable to read jessie_tcp@dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.192: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.195: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local from pod dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05: the server could not find the requested resource (get pods dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05) May 14 14:07:35.213: INFO: Lookups using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 failed for: [wheezy_udp@dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@dns-test-service.dns-8488.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_udp@dns-test-service.dns-8488.svc.cluster.local jessie_tcp@dns-test-service.dns-8488.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8488.svc.cluster.local] May 14 14:07:40.221: INFO: DNS probes using dns-8488/dns-test-88bcfa09-0947-4d0e-8e0a-e1228e507a05 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:07:41.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8488" for this suite. 
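The polling failures above are expected while the probe pod and the service endpoints converge; the run ends with the probes succeeding, as logged. The _http._tcp SRV names being queried come from a named service port; a sketch of the headless-service shape involved, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service               # illustrative
spec:
  clusterIP: None                      # headless: DNS returns the pod IPs
  selector:
    app: dns-test
  ports:
  - name: http                         # named port -> _http._tcp SRV records
    port: 80
EOF
# From inside the cluster, the SRV record for the named port would be:
#   dig +short _http._tcp.dns-test-service.<namespace>.svc.cluster.local SRV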
May 14 14:07:47.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:07:47.841: INFO: namespace dns-8488 deletion completed in 6.231210976s • [SLOW TEST:45.952 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:07:47.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 14 14:07:47.983: INFO: Waiting up to 5m0s for pod "pod-19788146-beac-4ced-a4d1-cf9da9199136" in namespace "emptydir-7318" to be "success or failure" May 14 14:07:47.999: INFO: Pod "pod-19788146-beac-4ced-a4d1-cf9da9199136": Phase="Pending", Reason="", readiness=false. Elapsed: 15.649533ms May 14 14:07:50.004: INFO: Pod "pod-19788146-beac-4ced-a4d1-cf9da9199136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020061469s May 14 14:07:52.029: INFO: Pod "pod-19788146-beac-4ced-a4d1-cf9da9199136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045538362s STEP: Saw pod success May 14 14:07:52.029: INFO: Pod "pod-19788146-beac-4ced-a4d1-cf9da9199136" satisfied condition "success or failure" May 14 14:07:52.032: INFO: Trying to get logs from node iruya-worker2 pod pod-19788146-beac-4ced-a4d1-cf9da9199136 container test-container: STEP: delete the pod May 14 14:07:52.048: INFO: Waiting for pod pod-19788146-beac-4ced-a4d1-cf9da9199136 to disappear May 14 14:07:52.058: INFO: Pod pod-19788146-beac-4ced-a4d1-cf9da9199136 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:07:52.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7318" for this suite. 
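The (root,0777,tmpfs) triple in the test name means: run as root, expect a 0777 file mode, and back the emptyDir with memory. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    command: ["sh", "-c", "mount | grep /cache; touch /cache/f; chmod 0777 /cache/f; ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory                   # tmpfs-backed
EOF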
May 14 14:07:58.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:07:58.142: INFO: namespace emptydir-7318 deletion completed in 6.081841507s • [SLOW TEST:10.301 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:07:58.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 14 14:07:58.214: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. May 14 14:07:58.496: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 14 14:08:00.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725062078, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725062078, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725062078, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725062078, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 14:08:03.791: INFO: Waited 759.226147ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:08:04.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9773" for this suite. 
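Registering an extension server with the aggregator comes down to an APIService object that maps a group/version onto an in-cluster Service. A sketch, with the sample-apiserver's group assumed and TLS verification skipped purely for brevity (a real registration should carry a caBundle):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # assumed sample group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  insecureSkipTLSVerify: true          # test-only; prefer caBundle
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                   # illustrative Service fronting the deployment
    namespace: default
EOF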
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 14:08:10.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-81234c2c-f947-42d2-8b95-769670bbbdb2
STEP: Creating a pod to test consume secrets
May 14 14:08:10.528: INFO: Waiting up to 5m0s for pod "pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778" in namespace "secrets-4408" to be "success or failure"
May 14 14:08:10.532: INFO: Pod "pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778": Phase="Pending", Reason="", readiness=false. Elapsed: 3.98214ms
May 14 14:08:12.536: INFO: Pod "pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008594754s
May 14 14:08:14.540: INFO: Pod "pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012526936s
STEP: Saw pod success
May 14 14:08:14.540: INFO: Pod "pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778" satisfied condition "success or failure"
May 14 14:08:14.543: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778 container secret-volume-test:
STEP: delete the pod
May 14 14:08:14.566: INFO: Waiting for pod pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778 to disappear
May 14 14:08:14.569: INFO: Pod pod-secrets-a22849f3-a934-463c-a90a-74d3d9396778 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 14 14:08:14.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4408" for this suite.
May 14 14:08:20.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 14:08:20.666: INFO: namespace secrets-4408 deletion completed in 6.093784755s
STEP: Destroying namespace "secret-namespace-5939" for this suite.
May 14 14:08:26.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 14:08:26.767: INFO: namespace secret-namespace-5939 deletion completed in 6.100800243s
• [SLOW TEST:16.384 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
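The property this test pins down is that a secret volume resolves strictly within the pod's own namespace: the identically named secret in secret-namespace-5939 must never be the one mounted by the pod in secrets-4408. A minimal sketch of the consuming pod follows, with illustrative image and command; note that SecretVolumeSource carries no namespace field, which is the point.

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod sketches the consuming pod: SecretName names a secret in the
// pod's own namespace only, so the kubelet mounts the local secret no
// matter what same-named secrets exist elsewhere.
func secretPod(secretName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-test"}, // illustrative name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []v1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // stand-in for the framework's mount-test image
				Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}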
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 14 14:08:26.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 14 14:08:26.804: INFO: Creating deployment "nginx-deployment"
May 14 14:08:26.832: INFO: Waiting for observed generation 1
May 14 14:08:28.855: INFO: Waiting for all required pods to come up
May 14 14:08:28.859: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 14 14:08:40.869: INFO: Waiting for deployment "nginx-deployment" to complete
May 14 14:08:40.875: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 14 14:08:40.882: INFO: Updating deployment nginx-deployment
May 14 14:08:40.882: INFO: Waiting for observed generation 2
May 14 14:08:43.067: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 14 14:08:43.557: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 14 14:08:43.559: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 14 14:08:43.566: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 14 14:08:43.566: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 14 14:08:43.568: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 14 14:08:43.571: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 14 14:08:43.571: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 14 14:08:43.577: INFO: Updating deployment nginx-deployment
May 14 14:08:43.577: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 14 14:08:43.970: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 14 14:08:44.115: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 14 14:08:46.366:
INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3893,SelfLink:/apis/apps/v1/namespaces/deployment-3893/deployments/nginx-deployment,UID:02d0fc03-8409-4d12-917f-b91cdaf00244,ResourceVersion:10867573,Generation:3,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-14 14:08:43 +0000 UTC 2020-05-14 14:08:43 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-14 14:08:44 +0000 UTC 2020-05-14 14:08:26 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 14 14:08:46.653: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3893,SelfLink:/apis/apps/v1/namespaces/deployment-3893/replicasets/nginx-deployment-55fb7cb77f,UID:b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e,ResourceVersion:10867563,Generation:3,CreationTimestamp:2020-05-14 14:08:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 02d0fc03-8409-4d12-917f-b91cdaf00244 0xc002b0ead7 0xc002b0ead8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 14:08:46.653: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 14 14:08:46.654: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3893,SelfLink:/apis/apps/v1/namespaces/deployment-3893/replicasets/nginx-deployment-7b8c6f4498,UID:2f582477-4dab-40d9-9616-03453ae10dcd,ResourceVersion:10867559,Generation:3,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 02d0fc03-8409-4d12-917f-b91cdaf00244 0xc002b0eba7 0xc002b0eba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 14 14:08:47.111: INFO: Pod "nginx-deployment-55fb7cb77f-4plvc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4plvc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-4plvc,UID:52201a2a-bfe2-4602-b70c-1b1d5d35bdaf,ResourceVersion:10867587,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4087 0xc002ad4088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4100} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.112: INFO: Pod "nginx-deployment-55fb7cb77f-9vcbw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9vcbw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-9vcbw,UID:c70c3c7a-bba8-4f95-9429-1b4794aa0478,ResourceVersion:10867574,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad41f7 0xc002ad41f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4270} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.112: INFO: Pod "nginx-deployment-55fb7cb77f-9vskq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9vskq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-9vskq,UID:7e8139fc-96a2-4972-b813-d38d883ad902,ResourceVersion:10867596,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4367 0xc002ad4368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad43e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.112: INFO: Pod "nginx-deployment-55fb7cb77f-b4r76" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b4r76,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-b4r76,UID:c7f0a4e6-5051-4e7d-81dc-49b4101ea557,ResourceVersion:10867495,Generation:0,CreationTimestamp:2020-05-14 14:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad44d7 0xc002ad44d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.112: INFO: Pod "nginx-deployment-55fb7cb77f-bj8nd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bj8nd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-bj8nd,UID:d19a39c7-fbe1-494d-8cf6-b61ef9f65a39,ResourceVersion:10867555,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4667 0xc002ad4668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad46e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.113: INFO: Pod "nginx-deployment-55fb7cb77f-d9ttg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d9ttg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-d9ttg,UID:47f95f79-b23c-4b0f-8c80-22fb54cc22b8,ResourceVersion:10867478,Generation:0,CreationTimestamp:2020-05-14 14:08:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4797 0xc002ad4798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4810} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.113: INFO: Pod "nginx-deployment-55fb7cb77f-dznrs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dznrs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-dznrs,UID:c52896d8-3ce6-43ae-8271-93d89b535df6,ResourceVersion:10867620,Generation:0,CreationTimestamp:2020-05-14 14:08:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4907 0xc002ad4908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4980} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad49a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:40 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.121,StartTime:2020-05-14 14:08:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.113: INFO: Pod "nginx-deployment-55fb7cb77f-fgnwd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fgnwd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-fgnwd,UID:de02d499-9f43-4afe-8808-8f46bdadc3a3,ResourceVersion:10867608,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4a97 0xc002ad4a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.114: INFO: Pod "nginx-deployment-55fb7cb77f-hxtdd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hxtdd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-hxtdd,UID:47a5f2ec-43ae-461c-a969-5968471f2364,ResourceVersion:10867496,Generation:0,CreationTimestamp:2020-05-14 14:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4c07 0xc002ad4c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.114: INFO: Pod "nginx-deployment-55fb7cb77f-jcsz4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jcsz4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-jcsz4,UID:eb6a8d9b-1040-4419-90b6-8950df806afd,ResourceVersion:10867558,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4d77 0xc002ad4d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.114: INFO: Pod "nginx-deployment-55fb7cb77f-qvm7l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qvm7l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-qvm7l,UID:6cb8fcbc-0385-4bd1-828c-c9baaedf62e0,ResourceVersion:10867610,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad4e97 0xc002ad4e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad4f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad4f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.114: INFO: Pod "nginx-deployment-55fb7cb77f-w7d96" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w7d96,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-w7d96,UID:a36c2a39-d860-41f3-9949-b7c4fda3aa4f,ResourceVersion:10867579,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad5007 0xc002ad5008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5080} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad50a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.114: INFO: Pod "nginx-deployment-55fb7cb77f-xm9rl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xm9rl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-55fb7cb77f-xm9rl,UID:297ff7ec-d427-4ca5-8294-8de82e7db8da,ResourceVersion:10867470,Generation:0,CreationTimestamp:2020-05-14 14:08:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b879b8a5-46e0-49b2-9a61-1d9b5d52fa4e 0xc002ad5177 0xc002ad5178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad51f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.115: INFO: Pod "nginx-deployment-7b8c6f4498-2xt7b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2xt7b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-2xt7b,UID:28bfb9ee-9f99-48d1-968d-a710ecad619c,ResourceVersion:10867603,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad52e7 0xc002ad52e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.115: INFO: Pod "nginx-deployment-7b8c6f4498-5vg68" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5vg68,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-5vg68,UID:81678b16-1a7b-41a8-a38e-cbb767a1bac1,ResourceVersion:10867421,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5447 0xc002ad5448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad54c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad54e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.190,StartTime:2020-05-14 14:08:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b5dbfec5352b3c51479fe1eb11e8ddfae38bcf11632d4e27bf3a9e9979de8845}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.115: INFO: Pod "nginx-deployment-7b8c6f4498-85b7x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-85b7x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-85b7x,UID:36391d09-54b4-4964-abc1-84d8a82b5585,ResourceVersion:10867571,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad55b7 0xc002ad55b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5630} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.115: INFO: Pod "nginx-deployment-7b8c6f4498-89qnt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-89qnt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-89qnt,UID:9462578e-f72a-43b4-bddb-d3df279a4050,ResourceVersion:10867628,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5717 0xc002ad5718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5790} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad57b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.115: INFO: Pod "nginx-deployment-7b8c6f4498-8h78z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8h78z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-8h78z,UID:3579cb92-3c20-4747-a5e5-2bd6e86a4384,ResourceVersion:10867606,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5877 0xc002ad5878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad58f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.116: INFO: Pod "nginx-deployment-7b8c6f4498-9847r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9847r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-9847r,UID:62130328-ce3a-4013-ae0e-e0484826ad0c,ResourceVersion:10867631,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad59d7 0xc002ad59d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.116: INFO: Pod "nginx-deployment-7b8c6f4498-9nnvl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nnvl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-9nnvl,UID:9355a7d0-763b-4d0a-98d0-681d87196689,ResourceVersion:10867411,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5b37 0xc002ad5b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.188,StartTime:2020-05-14 14:08:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ae496e1ac84a497bb065699d5340ddffb70c669c232fd368c642363eaec2f2c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.116: INFO: Pod "nginx-deployment-7b8c6f4498-bgg4g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bgg4g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-bgg4g,UID:2567bccb-026f-4835-b2f8-7139fc7f25d3,ResourceVersion:10867615,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5ca7 0xc002ad5ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.116: INFO: Pod "nginx-deployment-7b8c6f4498-db2qj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-db2qj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-db2qj,UID:9d5a15ab-16b8-40cd-bfcb-ea55fd1bf802,ResourceVersion:10867383,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5e07 0xc002ad5e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad5ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.117,StartTime:2020-05-14 14:08:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0c217cba1e8bbab748fbf3f5e6986d160534d168b03e81e854134ef04159a7dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.116: INFO: Pod "nginx-deployment-7b8c6f4498-f86xq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f86xq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-f86xq,UID:211ee380-640d-4a50-a4d8-92bb253b979b,ResourceVersion:10867422,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc002ad5f77 0xc002ad5f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad5ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.119,StartTime:2020-05-14 14:08:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b5547e3229b1204d83061cce07987819d755252515433d1faa6fc08a326ab5be}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.117: INFO: Pod "nginx-deployment-7b8c6f4498-g7s45" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g7s45,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-g7s45,UID:7475c1a2-c38c-4d2d-a8c8-7bc64a76f676,ResourceVersion:10867389,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee0e7 0xc0028ee0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee160} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.116,StartTime:2020-05-14 14:08:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f63821e52540310fa00988641b77aa505a3784026d35cdea59898b67d069d40d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.117: INFO: Pod "nginx-deployment-7b8c6f4498-gn7xq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gn7xq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-gn7xq,UID:37957e27-32ba-45a2-8f17-1c067c29e5ce,ResourceVersion:10867583,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee257 0xc0028ee258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.117: INFO: Pod "nginx-deployment-7b8c6f4498-gs72r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gs72r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-gs72r,UID:92ca0de5-dfe1-4911-a809-5a6133e1880a,ResourceVersion:10867425,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee3b7 0xc0028ee3b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.120,StartTime:2020-05-14 14:08:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://95e43346aea491ef3628a97c990fea4a23af1507239f6d7541792af8bf633aa7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.118: INFO: Pod "nginx-deployment-7b8c6f4498-jztvg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jztvg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-jztvg,UID:bc821f98-cd39-420c-aaa9-dc956bbaca5c,ResourceVersion:10867594,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee527 0xc0028ee528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee5a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.118: INFO: Pod "nginx-deployment-7b8c6f4498-klgwz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-klgwz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-klgwz,UID:d8b4cf6d-1be0-4320-8d5f-0d3d655a25a1,ResourceVersion:10867590,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee687 0xc0028ee688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee700} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.118: INFO: Pod "nginx-deployment-7b8c6f4498-nmdpp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nmdpp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-nmdpp,UID:57b8c1ee-d858-4a5b-8140-68a9d375669a,ResourceVersion:10867562,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee7e7 0xc0028ee7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee860} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.118: INFO: Pod "nginx-deployment-7b8c6f4498-qws2v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qws2v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-qws2v,UID:a7d72756-d29e-424d-b3f6-9f73f1fdaf52,ResourceVersion:10867406,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028ee947 0xc0028ee948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028ee9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028ee9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.118,StartTime:2020-05-14 14:08:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5e13ab0df9e21861da35153c5512144de9e27ac4a52ff9994fdfbf68dbb48e06}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.118: INFO: Pod "nginx-deployment-7b8c6f4498-rt4r4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rt4r4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-rt4r4,UID:b96d0702-3ab8-4817-a475-dc76da95e7dc,ResourceVersion:10867405,Generation:0,CreationTimestamp:2020-05-14 14:08:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028eeab7 0xc0028eeab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028eeb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028eeb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.189,StartTime:2020-05-14 14:08:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 14:08:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a07c6601ac494e4fc24338d5afb81e14db4fab504b764886007cdf4c05bd6da8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.119: INFO: Pod "nginx-deployment-7b8c6f4498-xgtvs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xgtvs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-xgtvs,UID:ede0e93b-edcd-4a5d-bc1b-cea707264bc5,ResourceVersion:10867560,Generation:0,CreationTimestamp:2020-05-14 14:08:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028eec27 0xc0028eec28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028eeca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028eecc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 14:08:47.119: INFO: Pod "nginx-deployment-7b8c6f4498-zr9h7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zr9h7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3893,SelfLink:/api/v1/namespaces/deployment-3893/pods/nginx-deployment-7b8c6f4498-zr9h7,UID:3f867e1f-889b-463d-9ab7-f9b75aa6e75c,ResourceVersion:10867627,Generation:0,CreationTimestamp:2020-05-14 14:08:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2f582477-4dab-40d9-9616-03453ae10dcd 0xc0028eed97 0xc0028eed98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8wsds {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8wsds,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8wsds true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028eee10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028eee30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:08:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-14 14:08:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:08:47.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3893" for this suite. 
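For context, the Deployment exercised above has the same shape as the manifest below; the replica count is illustrative rather than taken from the test source, while the name, labels, image, and zero grace period match the pod dumps above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment            # matches the pods dumped above
  namespace: deployment-3893
spec:
  replicas: 3                       # illustrative starting size
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Proportional scaling is the behaviour mid-rollout: when the Deployment is resized while an old and a new ReplicaSet coexist, the added replicas are split between them in proportion to their current sizes, which is why the dumps above show a mix of Running and Pending ("not available") pods from the same ReplicaSet.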
May 14 14:09:06.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:09:06.369: INFO: namespace deployment-3893 deletion completed in 18.565811217s • [SLOW TEST:39.602 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:09:06.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 14 14:09:06.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 14 14:09:06.788: INFO: stderr: "" May 14 14:09:06.788: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:09:06.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6729" for this suite. 
May 14 14:09:12.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:09:13.038: INFO: namespace kubectl-6729 deletion completed in 6.245652439s • [SLOW TEST:6.668 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:09:13.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-2857 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2857 to expose endpoints map[] May 14 14:09:13.913: INFO: Get endpoints failed (2.39263ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 14 14:09:15.096: INFO: successfully validated that service endpoint-test2 in namespace services-2857 exposes endpoints map[] (1.185329822s elapsed) STEP: Creating pod pod1 in namespace services-2857 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2857 to expose endpoints map[pod1:[80]] May 14 14:09:19.566: INFO: successfully validated that service endpoint-test2 in namespace services-2857 exposes endpoints map[pod1:[80]] (4.463825317s elapsed) STEP: Creating pod pod2 in namespace services-2857 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2857 to expose endpoints map[pod1:[80] pod2:[80]] May 14 14:09:22.712: INFO: successfully validated that service endpoint-test2 in namespace services-2857 exposes endpoints map[pod1:[80] pod2:[80]] (3.142224682s elapsed) STEP: Deleting pod pod1 in namespace services-2857 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2857 to expose endpoints map[pod2:[80]] May 14 14:09:23.802: INFO: successfully validated that service endpoint-test2 in namespace services-2857 exposes endpoints map[pod2:[80]] (1.085373954s elapsed) STEP: Deleting pod pod2 in namespace services-2857 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2857 to expose endpoints map[] May 14 14:09:23.862: INFO: successfully validated that service endpoint-test2 in namespace services-2857 exposes endpoints map[] (55.060948ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:09:23.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-2857" for this suite. May 14 14:09:46.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:09:46.229: INFO: namespace services-2857 deletion completed in 22.145750378s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.191 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:09:46.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 14 14:09:46.464: INFO: Waiting up to 5m0s for pod "client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943" in namespace "containers-7302" to be "success or failure" May 14 14:09:46.519: INFO: Pod "client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943": Phase="Pending", Reason="", readiness=false. Elapsed: 54.773466ms May 14 14:09:48.523: INFO: Pod "client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058718741s May 14 14:09:50.528: INFO: Pod "client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943": Phase="Running", Reason="", readiness=true. Elapsed: 4.063534966s May 14 14:09:52.532: INFO: Pod "client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067694234s STEP: Saw pod success May 14 14:09:52.532: INFO: Pod "client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943" satisfied condition "success or failure" May 14 14:09:52.535: INFO: Trying to get logs from node iruya-worker pod client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943 container test-container: STEP: delete the pod May 14 14:09:52.555: INFO: Waiting for pod client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943 to disappear May 14 14:09:52.558: INFO: Pod client-containers-d706f51e-5302-4467-8d46-0d5be7c7b943 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:09:52.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7302" for this suite. 
May 14 14:09:58.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:09:58.646: INFO: namespace containers-7302 deletion completed in 6.084508834s • [SLOW TEST:12.416 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:09:58.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 14 14:10:02.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-ceb27c2b-ee65-4d18-b579-53c2fedbc8ca -c busybox-main-container --namespace=emptydir-8334 -- cat /usr/share/volumeshare/shareddata.txt' May 14 14:10:03.050: INFO: stderr: "I0514 14:10:02.970778 2009 log.go:172] (0xc000956420) (0xc0005ea8c0) Create stream\nI0514 14:10:02.970870 2009 log.go:172] (0xc000956420) (0xc0005ea8c0) Stream added, broadcasting: 1\nI0514 14:10:02.973575 2009 log.go:172] (0xc000956420) Reply frame received for 1\nI0514 14:10:02.973632 2009 log.go:172] (0xc000956420) (0xc0008d2000) Create stream\nI0514 14:10:02.973648 2009 log.go:172] (0xc000956420) (0xc0008d2000) Stream added, broadcasting: 3\nI0514 14:10:02.974669 2009 log.go:172] (0xc000956420) Reply frame received for 3\nI0514 14:10:02.974707 2009 log.go:172] (0xc000956420) (0xc0005ea960) Create stream\nI0514 14:10:02.974721 2009 log.go:172] (0xc000956420) (0xc0005ea960) Stream added, broadcasting: 5\nI0514 14:10:02.975594 2009 log.go:172] (0xc000956420) Reply frame received for 5\nI0514 14:10:03.041704 2009 log.go:172] (0xc000956420) Data frame received for 5\nI0514 14:10:03.041749 2009 log.go:172] (0xc0005ea960) (5) Data frame handling\nI0514 14:10:03.041794 2009 log.go:172] (0xc000956420) Data frame received for 3\nI0514 14:10:03.041807 2009 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0514 14:10:03.041822 2009 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0514 14:10:03.041832 2009 log.go:172] (0xc000956420) Data frame received for 3\nI0514 14:10:03.041843 2009 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0514 14:10:03.043286 2009 log.go:172] (0xc000956420) Data frame received for 1\nI0514 14:10:03.043362 2009 log.go:172] (0xc0005ea8c0) (1) Data frame handling\nI0514 14:10:03.043422 2009 log.go:172] (0xc0005ea8c0) (1) Data frame sent\nI0514 14:10:03.043487 2009 log.go:172] (0xc000956420) (0xc0005ea8c0) Stream removed, broadcasting: 1\nI0514 14:10:03.043931 2009 log.go:172] (0xc000956420) (0xc0005ea8c0) 
Stream removed, broadcasting: 1\nI0514 14:10:03.043958 2009 log.go:172] (0xc000956420) (0xc0008d2000) Stream removed, broadcasting: 3\nI0514 14:10:03.043972 2009 log.go:172] (0xc000956420) (0xc0005ea960) Stream removed, broadcasting: 5\n" May 14 14:10:03.050: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:10:03.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8334" for this suite. May 14 14:10:09.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:10:09.165: INFO: namespace emptydir-8334 deletion completed in 6.110207615s • [SLOW TEST:10.519 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:10:09.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-d452c10b-8edc-4929-9b25-25638b323f33 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d452c10b-8edc-4929-9b25-25638b323f33 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:10:15.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7033" for this suite. 
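The update test just above depends on the kubelet periodically re-syncing configMap volumes, so a change to the ConfigMap object eventually appears in the mounted file without restarting the pod. A minimal sketch of such a pod; the names, key, and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd-example
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: configmap-test-upd-example   # update this ConfigMap and the file content follows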
May 14 14:10:37.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:10:37.422: INFO: namespace configmap-7033 deletion completed in 22.088334283s • [SLOW TEST:28.257 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:10:37.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-5c10a852-6229-4519-9420-552ba24c059d STEP: Creating a pod to test consume configMaps May 14 14:10:37.497: INFO: Waiting up to 5m0s for pod "pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c" in namespace "configmap-6992" to be "success or failure" May 14 14:10:37.499: INFO: Pod "pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204146ms May 14 14:10:39.504: INFO: Pod "pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006692192s May 14 14:10:41.508: INFO: Pod "pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c": Phase="Running", Reason="", readiness=true. Elapsed: 4.010463161s May 14 14:10:43.511: INFO: Pod "pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013990954s STEP: Saw pod success May 14 14:10:43.511: INFO: Pod "pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c" satisfied condition "success or failure" May 14 14:10:43.513: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c container configmap-volume-test: STEP: delete the pod May 14 14:10:43.588: INFO: Waiting for pod pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c to disappear May 14 14:10:43.596: INFO: Pod pod-configmaps-f34ab252-bf88-40e0-b6e3-1349addccc0c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:10:43.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6992" for this suite. 
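"As non-root" in the test name above means the container reads the mounted key while running with a non-zero UID, so the volume's file modes must permit that. A sketch with an illustrative UID and key:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # any non-root UID serves the purpose of the check
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
      defaultMode: 0444    # world-readable, so the non-root UID can read it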
May 14 14:10:49.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:10:49.694: INFO: namespace configmap-6992 deletion completed in 6.094525401s • [SLOW TEST:12.271 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:10:49.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 14 14:10:49.784: INFO: Waiting up to 5m0s for pod "pod-82c2643b-9973-4013-a000-136991ed0257" in namespace "emptydir-9756" to be "success or failure" May 14 14:10:49.787: INFO: Pod "pod-82c2643b-9973-4013-a000-136991ed0257": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611366ms May 14 14:10:51.804: INFO: Pod "pod-82c2643b-9973-4013-a000-136991ed0257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020095635s May 14 14:10:53.808: INFO: Pod "pod-82c2643b-9973-4013-a000-136991ed0257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023966514s STEP: Saw pod success May 14 14:10:53.808: INFO: Pod "pod-82c2643b-9973-4013-a000-136991ed0257" satisfied condition "success or failure" May 14 14:10:53.810: INFO: Trying to get logs from node iruya-worker pod pod-82c2643b-9973-4013-a000-136991ed0257 container test-container: STEP: delete the pod May 14 14:10:53.830: INFO: Waiting for pod pod-82c2643b-9973-4013-a000-136991ed0257 to disappear May 14 14:10:53.874: INFO: Pod pod-82c2643b-9973-4013-a000-136991ed0257 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:10:53.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9756" for this suite. 
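The emptyDir permission-matrix tests all follow one pattern: write a file with the mode named in the test, from the UID named in the test, onto the medium named in the test, then read it back. Roughly, for (non-root,0644,default):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # "non-root"; illustrative UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # "default" medium, i.e. the node's backing filesystem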
May 14 14:10:59.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:11:00.042: INFO: namespace emptydir-9756 deletion completed in 6.164431579s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:11:00.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-912133b7-0e3f-491b-8da2-9dd9a9317e76 STEP: Creating a pod to test consume configMaps May 14 14:11:00.157: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0" in namespace "projected-2334" to be "success or failure" May 14 14:11:00.174: INFO: Pod "pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.860969ms May 14 14:11:02.229: INFO: Pod "pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072136651s May 14 14:11:04.233: INFO: Pod "pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076338544s STEP: Saw pod success May 14 14:11:04.233: INFO: Pod "pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0" satisfied condition "success or failure" May 14 14:11:04.236: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0 container projected-configmap-volume-test: STEP: delete the pod May 14 14:11:04.272: INFO: Waiting for pod pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0 to disappear May 14 14:11:04.282: INFO: Pod pod-projected-configmaps-7dfa4cd3-92cd-4bb3-a7d6-9ba7d4dddaa0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:11:04.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2334" for this suite. 
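The "with mappings" variants use items to remap a ConfigMap key to a chosen relative path inside the mount, here through a projected volume. Key and path names below are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/projected/remapped/data"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example
          items:
          - key: data-1               # original key in the ConfigMap
            path: remapped/data       # the "mapping": exposed under this relative path instead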
May 14 14:11:10.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:11:10.402: INFO: namespace projected-2334 deletion completed in 6.117490998s • [SLOW TEST:10.358 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:11:10.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod May 14 14:11:14.538: INFO: Pod pod-hostip-19e3a26f-0910-43ef-a45c-82e53ea7ad7f has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:11:14.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6276" for this suite. May 14 14:11:36.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:11:36.639: INFO: namespace pods-6276 deletion completed in 22.096605981s • [SLOW TEST:26.237 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:11:36.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bd8d5227-6584-4714-a2c0-f47b1bba9836 STEP: Creating a pod to test consume secrets May 14 14:11:36.736: INFO: Waiting up to 5m0s for pod "pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd" in namespace "secrets-6952" to be "success or failure" May 14 14:11:36.740: INFO: Pod "pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.794708ms May 14 14:11:38.745: INFO: Pod "pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008635572s May 14 14:11:40.749: INFO: Pod "pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd": Phase="Running", Reason="", readiness=true. Elapsed: 4.012954822s May 14 14:11:42.753: INFO: Pod "pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016933448s STEP: Saw pod success May 14 14:11:42.753: INFO: Pod "pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd" satisfied condition "success or failure" May 14 14:11:42.756: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd container secret-env-test: STEP: delete the pod May 14 14:11:42.774: INFO: Waiting for pod pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd to disappear May 14 14:11:42.790: INFO: Pod pod-secrets-44a3ef48-d514-404b-b7a4-b140d54c5dbd no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:11:42.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6952" for this suite. May 14 14:11:48.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:11:48.884: INFO: namespace secrets-6952 deletion completed in 6.090497444s • [SLOW TEST:12.245 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:11:48.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 14 14:11:48.974: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:12:02.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2212" for this suite. 
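For the Secrets-in-env-vars test earlier on this page, the pod maps a secret key into an environment variable rather than a volume. A minimal sketch with hypothetical Secret and key names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example   # hypothetical Secret
          key: data-1                 # hypothetical key inside it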
May 14 14:12:08.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:12:08.335: INFO: namespace pods-2212 deletion completed in 6.113683775s • [SLOW TEST:19.450 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:12:08.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:12:08.389: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:12:12.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7211" for this suite. 
May 14 14:12:50.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:12:50.700: INFO: namespace pods-7211 deletion completed in 38.134216025s • [SLOW TEST:42.365 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:12:50.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:12:50.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739" in namespace "downward-api-1343" to be "success or failure" May 14 14:12:50.840: INFO: Pod "downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739": Phase="Pending", Reason="", readiness=false. Elapsed: 3.261304ms May 14 14:12:52.932: INFO: Pod "downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094492325s May 14 14:12:54.936: INFO: Pod "downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099074056s STEP: Saw pod success May 14 14:12:54.936: INFO: Pod "downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739" satisfied condition "success or failure" May 14 14:12:54.939: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739 container client-container: STEP: delete the pod May 14 14:12:55.060: INFO: Waiting for pod downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739 to disappear May 14 14:12:55.104: INFO: Pod downwardapi-volume-4f6e196b-281e-4fd3-bc29-b19176b3e739 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:12:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1343" for this suite. 
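The "mode on item file" check above sets a per-item mode in a downwardAPI volume and verifies the mounted file carries it. A sketch; the mode and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400          # per-item mode; this is what the test asserts on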
May 14 14:13:01.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:13:01.298: INFO: namespace downward-api-1343 deletion completed in 6.189383288s • [SLOW TEST:10.597 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:13:01.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:13:01.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46" in namespace "downward-api-3370" to be "success or failure" May 14 14:13:01.392: INFO: Pod "downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46": Phase="Pending", Reason="", readiness=false. Elapsed: 9.559778ms May 14 14:13:03.397: INFO: Pod "downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014327498s May 14 14:13:05.401: INFO: Pod "downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018847747s STEP: Saw pod success May 14 14:13:05.401: INFO: Pod "downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46" satisfied condition "success or failure" May 14 14:13:05.404: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46 container client-container: STEP: delete the pod May 14 14:13:05.426: INFO: Waiting for pod downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46 to disappear May 14 14:13:05.430: INFO: Pod downwardapi-volume-e81fe789-628e-4b88-97a6-aa768ecd7a46 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:13:05.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3370" for this suite. 
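Exposing a container's CPU request through the downward API uses resourceFieldRef instead of fieldRef, with divisor controlling the units the value is written in. An illustrative sketch:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m             # the value the mounted file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m          # report in millicores, so the file contains "250"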
May 14 14:13:11.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:13:11.576: INFO: namespace downward-api-3370 deletion completed in 6.142459111s • [SLOW TEST:10.278 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:13:11.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 14 14:13:11.706: INFO: Waiting up to 5m0s for pod "client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14" in namespace "containers-9427" to be "success or failure" May 14 14:13:11.724: INFO: Pod "client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14": Phase="Pending", Reason="", readiness=false. Elapsed: 18.282989ms May 14 14:13:13.728: INFO: Pod "client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022398195s May 14 14:13:15.731: INFO: Pod "client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025565332s STEP: Saw pod success May 14 14:13:15.731: INFO: Pod "client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14" satisfied condition "success or failure" May 14 14:13:15.734: INFO: Trying to get logs from node iruya-worker pod client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14 container test-container: STEP: delete the pod May 14 14:13:15.761: INFO: Waiting for pod client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14 to disappear May 14 14:13:15.783: INFO: Pod client-containers-b681e1b2-923e-43b5-862f-d4de6cb2cd14 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:13:15.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9427" for this suite. 
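The args-override counterpart to the entrypoint test keeps the image's ENTRYPOINT and replaces only its CMD, which is what spec.containers[].args alone does. A sketch, assuming a hypothetical image whose ENTRYPOINT echoes its arguments:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-args-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: example.com/entrypoint-echo:latest   # hypothetical image with a fixed ENTRYPOINT
    args: ["override", "arguments"]             # replaces only the image CMD; ENTRYPOINT is untouched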
May 14 14:13:21.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:13:21.966: INFO: namespace containers-9427 deletion completed in 6.17968384s • [SLOW TEST:10.390 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:13:21.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 14:13:22.065: INFO: Waiting up to 5m0s for pod "pod-1207228a-c56b-4780-b6d5-3d548e130789" in namespace "emptydir-4882" to be "success or failure" May 14 14:13:22.093: INFO: Pod "pod-1207228a-c56b-4780-b6d5-3d548e130789": Phase="Pending", Reason="", readiness=false. Elapsed: 27.456331ms May 14 14:13:24.096: INFO: Pod "pod-1207228a-c56b-4780-b6d5-3d548e130789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03080929s May 14 14:13:26.101: INFO: Pod "pod-1207228a-c56b-4780-b6d5-3d548e130789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035414213s STEP: Saw pod success May 14 14:13:26.101: INFO: Pod "pod-1207228a-c56b-4780-b6d5-3d548e130789" satisfied condition "success or failure" May 14 14:13:26.104: INFO: Trying to get logs from node iruya-worker2 pod pod-1207228a-c56b-4780-b6d5-3d548e130789 container test-container: STEP: delete the pod May 14 14:13:26.130: INFO: Waiting for pod pod-1207228a-c56b-4780-b6d5-3d548e130789 to disappear May 14 14:13:26.318: INFO: Pod pod-1207228a-c56b-4780-b6d5-3d548e130789 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:13:26.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4882" for this suite. 
May 14 14:13:32.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:13:32.446: INFO: namespace emptydir-4882 deletion completed in 6.123711258s • [SLOW TEST:10.480 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:13:32.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-cd72950e-9019-4ca9-8e42-b7e1e7e737ab STEP: Creating a pod to test consume secrets May 14 14:13:32.568: INFO: Waiting up to 5m0s for pod "pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4" in namespace "secrets-8737" to be "success or failure" May 14 14:13:32.591: INFO: Pod "pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.828451ms May 14 14:13:34.656: INFO: Pod "pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088398267s May 14 14:13:36.660: INFO: Pod "pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091815653s STEP: Saw pod success May 14 14:13:36.660: INFO: Pod "pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4" satisfied condition "success or failure" May 14 14:13:36.662: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4 container secret-volume-test: STEP: delete the pod May 14 14:13:36.789: INFO: Waiting for pod pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4 to disappear May 14 14:13:36.818: INFO: Pod pod-secrets-8a381246-b85b-4c34-be03-5770bf94d4e4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:13:36.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8737" for this suite. 
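Secret volumes support the same items remapping as configMap volumes, optionally with a per-item mode. Names below are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1                 # key in the Secret
        path: new-path-data-1       # file name it is exposed under
        mode: 0644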
May 14 14:13:42.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:13:42.952: INFO: namespace secrets-8737 deletion completed in 6.130024765s • [SLOW TEST:10.506 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:13:42.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 14:13:43.026: INFO: Waiting up to 5m0s for pod "pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14" in namespace "emptydir-7021" to be "success or failure" May 14 14:13:43.041: INFO: Pod "pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14": Phase="Pending", Reason="", readiness=false. Elapsed: 15.816608ms May 14 14:13:45.045: INFO: Pod "pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019242673s May 14 14:13:47.123: INFO: Pod "pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097421567s STEP: Saw pod success May 14 14:13:47.123: INFO: Pod "pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14" satisfied condition "success or failure" May 14 14:13:47.126: INFO: Trying to get logs from node iruya-worker2 pod pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14 container test-container: STEP: delete the pod May 14 14:13:47.200: INFO: Waiting for pod pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14 to disappear May 14 14:13:47.512: INFO: Pod pod-edf7302d-df1d-4121-a7ba-88b2bb0efe14 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:13:47.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7021" for this suite. 
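The tmpfs variants of the emptyDir tests differ from the default-medium ones only in medium: Memory, which backs the volume with RAM instead of node disk:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # tmpfs; usage counts against the container's memory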
May 14 14:13:53.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:13:53.632: INFO: namespace emptydir-7021 deletion completed in 6.115232159s • [SLOW TEST:10.680 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:13:53.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-5895 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5895 STEP: Deleting pre-stop pod May 14 14:14:06.795: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:14:06.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5895" for this suite. 
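The PreStop test registers a lifecycle preStop hook on the tester pod, so deleting the pod first reports back to the server pod (the "prestop": 1 seen in the JSON above). A rough sketch; the hook command and endpoint are hypothetical stand-ins for the suite's own mechanism:

apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "wget -q -O- http://server:8080/prestop"]   # hypothetical endpoint; the hook runs before SIGTERM is delivered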
May 14 14:14:44.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:14:44.891: INFO: namespace prestop-5895 deletion completed in 38.084160097s • [SLOW TEST:51.259 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:14:44.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-f13ff7b1-d7a0-4a48-8ecb-1e2d9a4d820d STEP: Creating a pod to test consume configMaps May 14 14:14:44.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925" in namespace "configmap-7813" to be "success or failure" May 14 14:14:44.982: INFO: Pod "pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69139ms May 14 14:14:46.986: INFO: Pod "pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009953099s May 14 14:14:48.988: INFO: Pod "pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012312125s STEP: Saw pod success May 14 14:14:48.988: INFO: Pod "pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925" satisfied condition "success or failure" May 14 14:14:48.990: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925 container configmap-volume-test: STEP: delete the pod May 14 14:14:49.174: INFO: Waiting for pod pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925 to disappear May 14 14:14:49.207: INFO: Pod pod-configmaps-bbc1fdf4-06ef-40b5-b3ac-46692ab23925 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:14:49.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7813" for this suite. 
May 14 14:14:55.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:14:55.303: INFO: namespace configmap-7813 deletion completed in 6.093165382s • [SLOW TEST:10.412 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:14:55.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:14:55.348: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 4.967131ms)
May 14 14:14:55.352: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.222703ms)
May 14 14:14:55.355: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.305223ms)
May 14 14:14:55.358: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.929859ms)
May 14 14:14:55.362: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.533378ms)
May 14 14:14:55.365: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.891462ms)
May 14 14:14:55.368: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.989419ms)
May 14 14:14:55.372: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.400521ms)
May 14 14:14:55.375: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.339278ms)
May 14 14:14:55.378: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.823628ms)
May 14 14:14:55.381: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.958742ms)
May 14 14:14:55.384: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.946819ms)
May 14 14:14:55.388: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.503904ms)
May 14 14:14:55.391: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.556095ms)
May 14 14:14:55.395: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.600599ms)
May 14 14:14:55.398: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.324292ms)
May 14 14:14:55.401: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.103692ms)
May 14 14:14:55.404: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.846252ms)
May 14 14:14:55.407: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.041019ms)
May 14 14:14:55.442: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 34.266823ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:14:55.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3013" for this suite. May 14 14:15:01.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:15:01.559: INFO: namespace proxy-3013 deletion completed in 6.112988203s • [SLOW TEST:6.255 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:15:01.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-fa81582f-43cd-4660-9c1c-4ebb12c29428 STEP: Creating secret with name s-test-opt-upd-5a54f379-6004-4848-ae28-569e5c2a991f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fa81582f-43cd-4660-9c1c-4ebb12c29428 STEP: Updating secret s-test-opt-upd-5a54f379-6004-4848-ae28-569e5c2a991f STEP: Creating secret with name s-test-opt-create-02ef59c5-7ce7-4af2-93b0-dcee1b92a99b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:16:20.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6412" for this suite. 
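The Secrets spec above relies on secret volumes marked optional: the pod stays runnable while a referenced secret is absent, and the kubelet projects the data once the secret appears, then re-syncs the mounted files on update or deletion. The three secrets (s-test-opt-del, s-test-opt-upd, s-test-opt-create) exercise the delete, update, and late-create paths, which is what "waiting to observe update in volume" polls for. A sketch of one such volume, with an illustrative secret name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "secret-vol",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				// The secret may not exist yet; Optional tolerates that and
				// the kubelet fills the volume in once it is created.
				SecretName: "maybe-missing-secret",
				Optional:   &optional,
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}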
May 14 14:16:42.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:16:42.244: INFO: namespace secrets-6412 deletion completed in 22.118777582s • [SLOW TEST:100.685 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:16:42.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 14:16:42.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-988' May 14 14:16:44.990: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 14:16:44.990: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 14 14:16:49.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-988' May 14 14:16:49.286: INFO: stderr: "" May 14 14:16:49.286: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:16:49.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-988" for this suite. 
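The stderr above is expected on this release: kubectl run --generator=deployment/apps.v1 was already deprecated in favor of kubectl create deployment (or kubectl run --generator=run-pod/v1 for bare pods). For comparison, the object that command generates is roughly the following apps/v1 Deployment; the replica count and the run=<name> label follow kubectl's defaults of that era, so treat them as assumptions rather than this run's exact output:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// The selector must match the pod template's labels.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(b))
}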
May 14 14:17:11.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:17:11.391: INFO: namespace kubectl-988 deletion completed in 22.100979972s • [SLOW TEST:29.147 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:17:11.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 14 14:17:11.543: INFO: Waiting up to 5m0s for pod "var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67" in namespace "var-expansion-2438" to be "success or failure" May 14 14:17:11.560: INFO: Pod "var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67": Phase="Pending", Reason="", readiness=false. Elapsed: 17.150776ms May 14 14:17:13.564: INFO: Pod "var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021635048s May 14 14:17:15.569: INFO: Pod "var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02574551s STEP: Saw pod success May 14 14:17:15.569: INFO: Pod "var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67" satisfied condition "success or failure" May 14 14:17:15.572: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67 container dapi-container: STEP: delete the pod May 14 14:17:15.817: INFO: Waiting for pod var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67 to disappear May 14 14:17:15.934: INFO: Pod var-expansion-970c3d6d-a6b8-4f4f-8d13-52c259a0be67 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:17:15.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2438" for this suite. 
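The Variable Expansion spec asserts that the kubelet substitutes $(NAME) references in a container's command and args from the container's declared env before exec. A minimal sketch with assumed names (the message value and container name are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// $(MESSAGE) is expanded by the kubelet, not by a shell, so no
	// shell wrapper is needed; /bin/echo receives the literal value.
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox:1.29",
		Command: []string{"/bin/echo", "$(MESSAGE)"},
		Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}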
May 14 14:17:21.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:17:22.044: INFO: namespace var-expansion-2438 deletion completed in 6.107935739s • [SLOW TEST:10.653 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:17:22.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy May 14 14:17:22.106: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix052069354/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:17:22.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2586" for this suite. 
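kubectl proxy --unix-socket serves the API over a local unix socket instead of a TCP port, and the spec above then retrieves /api/ through it. A sketch of a Go client doing the same, against a hypothetical socket path (this run used a temp directory, truncated above); the request URL's host is ignored because the transport always dials the socket:

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	socket := "/tmp/kubectl-proxy.sock" // hypothetical; pass the same path to kubectl proxy --unix-socket
	tr := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", socket)
		},
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}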
May 14 14:17:28.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:17:28.282: INFO: namespace kubectl-2586 deletion completed in 6.10917071s • [SLOW TEST:6.238 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:17:28.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 14 14:17:32.893: INFO: Successfully updated pod "annotationupdateafbee396-892b-4134-bf37-3d8fe6dfe02c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:17:34.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6113" for this suite. 
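The projected downward API spec exposes pod metadata as files and checks that the kubelet rewrites them when the metadata changes: after "Successfully updated pod" patches the annotations, the test waits for the mounted file to follow. A sketch of such a volume (volume and file names are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The kubelet re-renders this file whenever the
							// pod's annotations change.
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}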
May 14 14:17:56.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:17:57.043: INFO: namespace projected-6113 deletion completed in 22.122711504s • [SLOW TEST:28.761 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:17:57.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-1c65d2f8-112f-4316-89f5-e4a4f809137b STEP: Creating a pod to test consume configMaps May 14 14:17:57.107: INFO: Waiting up to 5m0s for pod "pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe" in namespace "configmap-4202" to be "success or failure" May 14 14:17:57.120: INFO: Pod "pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.595778ms May 14 14:17:59.144: INFO: Pod "pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036662276s May 14 14:18:01.149: INFO: Pod "pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041099716s STEP: Saw pod success May 14 14:18:01.149: INFO: Pod "pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe" satisfied condition "success or failure" May 14 14:18:01.152: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe container configmap-volume-test: STEP: delete the pod May 14 14:18:01.212: INFO: Waiting for pod pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe to disappear May 14 14:18:01.230: INFO: Pod pod-configmaps-31dbeb70-9796-42c9-a513-adaa02061ebe no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:18:01.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4202" for this suite. 
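This variant adds a per-item file mode on top of the key-to-path mapping, which is why the spec is tagged [LinuxOnly]: POSIX modes only apply on Linux filesystems. Sketch (the ConfigMap name, key, path, and the 0400 mode are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // owner read-only; ignored on non-POSIX mounts
	vol := corev1.Volume{
		Name: "cm-vol",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
				Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2", Mode: &mode}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}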
May 14 14:18:07.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:18:07.411: INFO: namespace configmap-4202 deletion completed in 6.177571807s • [SLOW TEST:10.368 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:18:07.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5090/configmap-test-dedbcaa0-89ca-4769-91d1-7057e5cad0e0 STEP: Creating a pod to test consume configMaps May 14 14:18:07.708: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e" in namespace "configmap-5090" to be "success or failure" May 14 14:18:07.716: INFO: Pod "pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.245636ms May 14 14:18:09.749: INFO: Pod "pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040772679s May 14 14:18:11.753: INFO: Pod "pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044593415s STEP: Saw pod success May 14 14:18:11.753: INFO: Pod "pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e" satisfied condition "success or failure" May 14 14:18:11.756: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e container env-test: STEP: delete the pod May 14 14:18:11.777: INFO: Waiting for pod pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e to disappear May 14 14:18:11.796: INFO: Pod pod-configmaps-c6f1fe7e-1c62-4e8b-aea0-323765f7547e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:18:11.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5090" for this suite. 
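Here the ConfigMap is consumed through the environment rather than a volume, via valueFrom.configMapKeyRef; the env-test container then echoes the variable and the test checks its logs. Sketch with assumed ConfigMap and key names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			// Resolved once at container start; unlike a volume mount,
			// later ConfigMap updates do not propagate into the env.
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
				Key:                  "data-1",
			},
		},
	}
	b, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(b))
}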
May 14 14:18:17.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:18:17.907: INFO: namespace configmap-5090 deletion completed in 6.107395435s • [SLOW TEST:10.496 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:18:17.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:18:18.031: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.322531ms)
May 14 14:18:18.033: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.680311ms)
May 14 14:18:18.036: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.197557ms)
May 14 14:18:18.038: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.261315ms)
May 14 14:18:18.041: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.64051ms)
May 14 14:18:18.043: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.439617ms)
May 14 14:18:18.046: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.506898ms)
May 14 14:18:18.067: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 20.820199ms)
May 14 14:18:18.070: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.078885ms)
May 14 14:18:18.073: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.554409ms)
May 14 14:18:18.076: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.860912ms)
May 14 14:18:18.079: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.508852ms)
May 14 14:18:18.082: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.938416ms)
May 14 14:18:18.084: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.454077ms)
May 14 14:18:18.087: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.708129ms)
May 14 14:18:18.089: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.369467ms)
May 14 14:18:18.092: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.354615ms)
May 14 14:18:18.094: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.539772ms)
May 14 14:18:18.097: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.307504ms)
May 14 14:18:18.099: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.819652ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:18:18.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7895" for this suite. May 14 14:18:24.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:18:24.195: INFO: namespace proxy-7895 deletion completed in 6.092460177s • [SLOW TEST:6.287 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:18:24.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:18:28.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4486" for this suite. 
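The hostAliases spec above verifies that pod.spec.hostAliases entries are appended by the kubelet to the container's /etc/hosts. A sketch with illustrative addresses and hostnames (not necessarily the fixture's values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"},
		Spec: corev1.PodSpec{
			// Each entry becomes a line in the container's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.remote", "bar.remote"},
			}},
			Containers: []corev1.Container{{
				Name:    "host-checker",
				Image:   "busybox:1.29",
				Command: []string{"cat", "/etc/hosts"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}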
May 14 14:19:14.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:19:14.524: INFO: namespace kubelet-test-4486 deletion completed in 46.116088998s • [SLOW TEST:50.330 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:19:14.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 14 14:19:14.676: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:14.687: INFO: Number of nodes with available pods: 0 May 14 14:19:14.687: INFO: Node iruya-worker is running more than one daemon pod May 14 14:19:15.692: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:15.695: INFO: Number of nodes with available pods: 0 May 14 14:19:15.695: INFO: Node iruya-worker is running more than one daemon pod May 14 14:19:16.727: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:16.730: INFO: Number of nodes with available pods: 0 May 14 14:19:16.730: INFO: Node iruya-worker is running more than one daemon pod May 14 14:19:17.691: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:17.695: INFO: Number of nodes with available pods: 0 May 14 14:19:17.695: INFO: Node iruya-worker is running more than one daemon pod May 14 14:19:18.718: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:18.763: INFO: Number of nodes with available pods: 1 May 14 14:19:18.763: INFO: Node iruya-worker is running more than one daemon pod May 14 14:19:19.691: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:19.694: INFO: Number of nodes with available pods: 2 May 14 14:19:19.694: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 14 14:19:19.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:19:19.771: INFO: Number of nodes with available pods: 2 May 14 14:19:19.771: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5123, will wait for the garbage collector to delete the pods May 14 14:19:20.943: INFO: Deleting DaemonSet.extensions daemon-set took: 77.757905ms May 14 14:19:21.243: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.197942ms May 14 14:19:24.747: INFO: Number of nodes with available pods: 0 May 14 14:19:24.747: INFO: Number of running nodes: 0, number of available pods: 0 May 14 14:19:24.750: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5123/daemonsets","resourceVersion":"10869944"},"items":null} May 14 14:19:24.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5123/pods","resourceVersion":"10869944"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:19:24.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5123" for this suite. 
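Throughout the DaemonSet run above, the controller skips iruya-control-plane because the DaemonSet's pods carry no toleration for its node-role.kubernetes.io/master:NoSchedule taint, so "Number of running nodes" tops out at the two workers. A DaemonSet that should also cover control-plane nodes would add a toleration like the following to its pod template; this is a hypothetical addition, since the conformance fixture deliberately omits it:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Matches the taint logged above; Exists tolerates the taint
	// regardless of its (empty) value.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	b, _ := json.Marshal(tol)
	fmt.Println(string(b))
}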
May 14 14:19:30.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:19:30.863: INFO: namespace daemonsets-5123 deletion completed in 6.097886432s • [SLOW TEST:16.338 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:19:30.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2129 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2129 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2129 May 14 14:19:30.969: INFO: Found 0 stateful pods, waiting for 1 May 14 14:19:40.973: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 14 14:19:40.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 14:19:41.235: INFO: stderr: "I0514 14:19:41.116637 2097 log.go:172] (0xc0006b0420) (0xc000352820) Create stream\nI0514 14:19:41.116690 2097 log.go:172] (0xc0006b0420) (0xc000352820) Stream added, broadcasting: 1\nI0514 14:19:41.119517 2097 log.go:172] (0xc0006b0420) Reply frame received for 1\nI0514 14:19:41.119633 2097 log.go:172] (0xc0006b0420) (0xc0008a8000) Create stream\nI0514 14:19:41.119678 2097 log.go:172] (0xc0006b0420) (0xc0008a8000) Stream added, broadcasting: 3\nI0514 14:19:41.121656 2097 log.go:172] (0xc0006b0420) Reply frame received for 3\nI0514 14:19:41.121706 2097 log.go:172] (0xc0006b0420) (0xc0008a80a0) Create stream\nI0514 14:19:41.121721 2097 log.go:172] (0xc0006b0420) (0xc0008a80a0) Stream added, broadcasting: 5\nI0514 14:19:41.122937 2097 log.go:172] (0xc0006b0420) Reply frame received for 5\nI0514 14:19:41.201641 2097 log.go:172] (0xc0006b0420) Data frame received for 5\nI0514 14:19:41.201676 2097 log.go:172] (0xc0008a80a0) (5) Data frame handling\nI0514 14:19:41.201695 2097 log.go:172] (0xc0008a80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 14:19:41.227122 2097 log.go:172] (0xc0006b0420) Data frame received for 
3\nI0514 14:19:41.227168 2097 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0514 14:19:41.227190 2097 log.go:172] (0xc0008a8000) (3) Data frame sent\nI0514 14:19:41.227205 2097 log.go:172] (0xc0006b0420) Data frame received for 3\nI0514 14:19:41.227218 2097 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0514 14:19:41.227507 2097 log.go:172] (0xc0006b0420) Data frame received for 5\nI0514 14:19:41.227539 2097 log.go:172] (0xc0008a80a0) (5) Data frame handling\nI0514 14:19:41.229723 2097 log.go:172] (0xc0006b0420) Data frame received for 1\nI0514 14:19:41.229747 2097 log.go:172] (0xc000352820) (1) Data frame handling\nI0514 14:19:41.229765 2097 log.go:172] (0xc000352820) (1) Data frame sent\nI0514 14:19:41.230106 2097 log.go:172] (0xc0006b0420) (0xc000352820) Stream removed, broadcasting: 1\nI0514 14:19:41.230141 2097 log.go:172] (0xc0006b0420) Go away received\nI0514 14:19:41.230534 2097 log.go:172] (0xc0006b0420) (0xc000352820) Stream removed, broadcasting: 1\nI0514 14:19:41.230559 2097 log.go:172] (0xc0006b0420) (0xc0008a8000) Stream removed, broadcasting: 3\nI0514 14:19:41.230571 2097 log.go:172] (0xc0006b0420) (0xc0008a80a0) Stream removed, broadcasting: 5\n" May 14 14:19:41.235: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 14:19:41.235: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 14:19:41.239: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 14 14:19:51.244: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 14:19:51.244: INFO: Waiting for statefulset status.replicas updated to 0 May 14 14:19:51.257: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:19:51.257: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:19:51.257: INFO: May 14 14:19:51.257: INFO: StatefulSet ss has not reached scale 3, at 1 May 14 14:19:52.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9966635s May 14 14:19:53.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970500589s May 14 14:19:54.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.738321364s May 14 14:19:55.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.657187878s May 14 14:19:56.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.651887937s May 14 14:19:57.654: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.604594099s May 14 14:19:58.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.600119458s May 14 14:19:59.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.595287502s May 14 14:20:00.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 589.928242ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2129 May 14 14:20:01.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' May 14 14:20:01.910: INFO: stderr: "I0514 14:20:01.808217 2120 log.go:172] (0xc000a364d0) (0xc000522820) Create stream\nI0514 14:20:01.808308 2120 log.go:172] (0xc000a364d0) (0xc000522820) Stream added, broadcasting: 1\nI0514 14:20:01.813425 2120 log.go:172] (0xc000a364d0) Reply frame received for 1\nI0514 14:20:01.813464 2120 log.go:172] (0xc000a364d0) (0xc000522000) Create stream\nI0514 14:20:01.813478 2120 log.go:172] (0xc000a364d0) (0xc000522000) Stream added, broadcasting: 3\nI0514 14:20:01.814500 2120 log.go:172] (0xc000a364d0) Reply frame received for 3\nI0514 14:20:01.814538 2120 log.go:172] (0xc000a364d0) (0xc0007141e0) Create stream\nI0514 14:20:01.814549 2120 log.go:172] (0xc000a364d0) (0xc0007141e0) Stream added, broadcasting: 5\nI0514 14:20:01.815562 2120 log.go:172] (0xc000a364d0) Reply frame received for 5\nI0514 14:20:01.903729 2120 log.go:172] (0xc000a364d0) Data frame received for 5\nI0514 14:20:01.903764 2120 log.go:172] (0xc0007141e0) (5) Data frame handling\nI0514 14:20:01.903778 2120 log.go:172] (0xc0007141e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0514 14:20:01.903792 2120 log.go:172] (0xc000a364d0) Data frame received for 3\nI0514 14:20:01.903798 2120 log.go:172] (0xc000522000) (3) Data frame handling\nI0514 14:20:01.903805 2120 log.go:172] (0xc000522000) (3) Data frame sent\nI0514 14:20:01.903811 2120 log.go:172] (0xc000a364d0) Data frame received for 3\nI0514 14:20:01.903816 2120 log.go:172] (0xc000522000) (3) Data frame handling\nI0514 14:20:01.903935 2120 log.go:172] (0xc000a364d0) Data frame received for 5\nI0514 14:20:01.903963 2120 log.go:172] (0xc0007141e0) (5) Data frame handling\nI0514 14:20:01.905616 2120 log.go:172] (0xc000a364d0) Data frame received for 1\nI0514 14:20:01.905638 2120 log.go:172] (0xc000522820) (1) Data frame handling\nI0514 14:20:01.905648 2120 log.go:172] (0xc000522820) (1) Data frame sent\nI0514 14:20:01.905669 2120 log.go:172] (0xc000a364d0) (0xc000522820) Stream removed, broadcasting: 1\nI0514 14:20:01.905692 2120 log.go:172] (0xc000a364d0) Go away received\nI0514 14:20:01.906015 2120 log.go:172] (0xc000a364d0) (0xc000522820) Stream removed, broadcasting: 1\nI0514 14:20:01.906036 2120 log.go:172] (0xc000a364d0) (0xc000522000) Stream removed, broadcasting: 3\nI0514 14:20:01.906045 2120 log.go:172] (0xc000a364d0) (0xc0007141e0) Stream removed, broadcasting: 5\n" May 14 14:20:01.910: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 14:20:01.910: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 14:20:01.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 14:20:02.113: INFO: stderr: "I0514 14:20:02.035872 2140 log.go:172] (0xc0001166e0) (0xc0009f26e0) Create stream\nI0514 14:20:02.035937 2140 log.go:172] (0xc0001166e0) (0xc0009f26e0) Stream added, broadcasting: 1\nI0514 14:20:02.038793 2140 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0514 14:20:02.038835 2140 log.go:172] (0xc0001166e0) (0xc000954000) Create stream\nI0514 14:20:02.038849 2140 log.go:172] (0xc0001166e0) (0xc000954000) Stream added, broadcasting: 3\nI0514 14:20:02.039886 2140 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0514 14:20:02.039917 2140 log.go:172] (0xc0001166e0) (0xc0009f2780) Create stream\nI0514 
14:20:02.039927 2140 log.go:172] (0xc0001166e0) (0xc0009f2780) Stream added, broadcasting: 5\nI0514 14:20:02.040859 2140 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0514 14:20:02.105532 2140 log.go:172] (0xc0001166e0) Data frame received for 5\nI0514 14:20:02.105653 2140 log.go:172] (0xc0009f2780) (5) Data frame handling\nI0514 14:20:02.105689 2140 log.go:172] (0xc0009f2780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0514 14:20:02.105867 2140 log.go:172] (0xc0001166e0) Data frame received for 5\nI0514 14:20:02.105899 2140 log.go:172] (0xc0009f2780) (5) Data frame handling\nI0514 14:20:02.105949 2140 log.go:172] (0xc0001166e0) Data frame received for 3\nI0514 14:20:02.105982 2140 log.go:172] (0xc000954000) (3) Data frame handling\nI0514 14:20:02.106004 2140 log.go:172] (0xc000954000) (3) Data frame sent\nI0514 14:20:02.106017 2140 log.go:172] (0xc0001166e0) Data frame received for 3\nI0514 14:20:02.106027 2140 log.go:172] (0xc000954000) (3) Data frame handling\nI0514 14:20:02.107295 2140 log.go:172] (0xc0001166e0) Data frame received for 1\nI0514 14:20:02.107319 2140 log.go:172] (0xc0009f26e0) (1) Data frame handling\nI0514 14:20:02.107350 2140 log.go:172] (0xc0009f26e0) (1) Data frame sent\nI0514 14:20:02.107389 2140 log.go:172] (0xc0001166e0) (0xc0009f26e0) Stream removed, broadcasting: 1\nI0514 14:20:02.107565 2140 log.go:172] (0xc0001166e0) Go away received\nI0514 14:20:02.107993 2140 log.go:172] (0xc0001166e0) (0xc0009f26e0) Stream removed, broadcasting: 1\nI0514 14:20:02.108026 2140 log.go:172] (0xc0001166e0) (0xc000954000) Stream removed, broadcasting: 3\nI0514 14:20:02.108046 2140 log.go:172] (0xc0001166e0) (0xc0009f2780) Stream removed, broadcasting: 5\n" May 14 14:20:02.113: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 14:20:02.113: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 14:20:02.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 14:20:02.357: INFO: stderr: "I0514 14:20:02.291544 2162 log.go:172] (0xc00099a840) (0xc00095aaa0) Create stream\nI0514 14:20:02.291603 2162 log.go:172] (0xc00099a840) (0xc00095aaa0) Stream added, broadcasting: 1\nI0514 14:20:02.294528 2162 log.go:172] (0xc00099a840) Reply frame received for 1\nI0514 14:20:02.294591 2162 log.go:172] (0xc00099a840) (0xc00095a000) Create stream\nI0514 14:20:02.294604 2162 log.go:172] (0xc00099a840) (0xc00095a000) Stream added, broadcasting: 3\nI0514 14:20:02.295416 2162 log.go:172] (0xc00099a840) Reply frame received for 3\nI0514 14:20:02.295459 2162 log.go:172] (0xc00099a840) (0xc0004101e0) Create stream\nI0514 14:20:02.295487 2162 log.go:172] (0xc00099a840) (0xc0004101e0) Stream added, broadcasting: 5\nI0514 14:20:02.296227 2162 log.go:172] (0xc00099a840) Reply frame received for 5\nI0514 14:20:02.350392 2162 log.go:172] (0xc00099a840) Data frame received for 3\nI0514 14:20:02.350422 2162 log.go:172] (0xc00095a000) (3) Data frame handling\nI0514 14:20:02.350432 2162 log.go:172] (0xc00095a000) (3) Data frame sent\nI0514 14:20:02.350437 2162 log.go:172] (0xc00099a840) Data frame received for 3\nI0514 14:20:02.350443 2162 log.go:172] (0xc00095a000) (3) Data frame handling\nI0514 14:20:02.350476 2162 log.go:172] (0xc00099a840) Data frame 
received for 5\nI0514 14:20:02.350490 2162 log.go:172] (0xc0004101e0) (5) Data frame handling\nI0514 14:20:02.350501 2162 log.go:172] (0xc0004101e0) (5) Data frame sent\nI0514 14:20:02.350509 2162 log.go:172] (0xc00099a840) Data frame received for 5\nI0514 14:20:02.350523 2162 log.go:172] (0xc0004101e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0514 14:20:02.351954 2162 log.go:172] (0xc00099a840) Data frame received for 1\nI0514 14:20:02.351973 2162 log.go:172] (0xc00095aaa0) (1) Data frame handling\nI0514 14:20:02.351984 2162 log.go:172] (0xc00095aaa0) (1) Data frame sent\nI0514 14:20:02.351995 2162 log.go:172] (0xc00099a840) (0xc00095aaa0) Stream removed, broadcasting: 1\nI0514 14:20:02.352010 2162 log.go:172] (0xc00099a840) Go away received\nI0514 14:20:02.352401 2162 log.go:172] (0xc00099a840) (0xc00095aaa0) Stream removed, broadcasting: 1\nI0514 14:20:02.352419 2162 log.go:172] (0xc00099a840) (0xc00095a000) Stream removed, broadcasting: 3\nI0514 14:20:02.352426 2162 log.go:172] (0xc00099a840) (0xc0004101e0) Stream removed, broadcasting: 5\n" May 14 14:20:02.357: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 14:20:02.357: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 14:20:02.367: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 14:20:02.367: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 14:20:02.367: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 14 14:20:02.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 14:20:02.609: INFO: stderr: "I0514 14:20:02.515886 2182 log.go:172] (0xc000a542c0) (0xc00099a5a0) Create stream\nI0514 14:20:02.515948 2182 log.go:172] (0xc000a542c0) (0xc00099a5a0) Stream added, broadcasting: 1\nI0514 14:20:02.519660 2182 log.go:172] (0xc000a542c0) Reply frame received for 1\nI0514 14:20:02.519727 2182 log.go:172] (0xc000a542c0) (0xc00083c000) Create stream\nI0514 14:20:02.519760 2182 log.go:172] (0xc000a542c0) (0xc00083c000) Stream added, broadcasting: 3\nI0514 14:20:02.520849 2182 log.go:172] (0xc000a542c0) Reply frame received for 3\nI0514 14:20:02.520914 2182 log.go:172] (0xc000a542c0) (0xc00099a6e0) Create stream\nI0514 14:20:02.520931 2182 log.go:172] (0xc000a542c0) (0xc00099a6e0) Stream added, broadcasting: 5\nI0514 14:20:02.522363 2182 log.go:172] (0xc000a542c0) Reply frame received for 5\nI0514 14:20:02.601366 2182 log.go:172] (0xc000a542c0) Data frame received for 5\nI0514 14:20:02.601424 2182 log.go:172] (0xc00099a6e0) (5) Data frame handling\nI0514 14:20:02.601443 2182 log.go:172] (0xc00099a6e0) (5) Data frame sent\nI0514 14:20:02.601457 2182 log.go:172] (0xc000a542c0) Data frame received for 5\nI0514 14:20:02.601468 2182 log.go:172] (0xc00099a6e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 14:20:02.601498 2182 log.go:172] (0xc000a542c0) Data frame received for 3\nI0514 14:20:02.601523 2182 log.go:172] (0xc00083c000) (3) Data frame handling\nI0514 14:20:02.601544 2182 log.go:172] (0xc00083c000) (3) Data frame sent\nI0514 14:20:02.601562 2182 
log.go:172] (0xc000a542c0) Data frame received for 3\nI0514 14:20:02.601573 2182 log.go:172] (0xc00083c000) (3) Data frame handling\nI0514 14:20:02.603344 2182 log.go:172] (0xc000a542c0) Data frame received for 1\nI0514 14:20:02.603374 2182 log.go:172] (0xc00099a5a0) (1) Data frame handling\nI0514 14:20:02.603409 2182 log.go:172] (0xc00099a5a0) (1) Data frame sent\nI0514 14:20:02.603434 2182 log.go:172] (0xc000a542c0) (0xc00099a5a0) Stream removed, broadcasting: 1\nI0514 14:20:02.603703 2182 log.go:172] (0xc000a542c0) Go away received\nI0514 14:20:02.603970 2182 log.go:172] (0xc000a542c0) (0xc00099a5a0) Stream removed, broadcasting: 1\nI0514 14:20:02.604025 2182 log.go:172] (0xc000a542c0) (0xc00083c000) Stream removed, broadcasting: 3\nI0514 14:20:02.604046 2182 log.go:172] (0xc000a542c0) (0xc00099a6e0) Stream removed, broadcasting: 5\n" May 14 14:20:02.609: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 14:20:02.609: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 14:20:02.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 14:20:02.859: INFO: stderr: "I0514 14:20:02.738869 2204 log.go:172] (0xc00013cfd0) (0xc0005c6a00) Create stream\nI0514 14:20:02.738938 2204 log.go:172] (0xc00013cfd0) (0xc0005c6a00) Stream added, broadcasting: 1\nI0514 14:20:02.742651 2204 log.go:172] (0xc00013cfd0) Reply frame received for 1\nI0514 14:20:02.742690 2204 log.go:172] (0xc00013cfd0) (0xc0003ec000) Create stream\nI0514 14:20:02.742706 2204 log.go:172] (0xc00013cfd0) (0xc0003ec000) Stream added, broadcasting: 3\nI0514 14:20:02.743486 2204 log.go:172] (0xc00013cfd0) Reply frame received for 3\nI0514 14:20:02.743522 2204 log.go:172] (0xc00013cfd0) (0xc0005c6280) Create stream\nI0514 14:20:02.743531 2204 log.go:172] (0xc00013cfd0) (0xc0005c6280) Stream added, broadcasting: 5\nI0514 14:20:02.744577 2204 log.go:172] (0xc00013cfd0) Reply frame received for 5\nI0514 14:20:02.811930 2204 log.go:172] (0xc00013cfd0) Data frame received for 5\nI0514 14:20:02.811956 2204 log.go:172] (0xc0005c6280) (5) Data frame handling\nI0514 14:20:02.811972 2204 log.go:172] (0xc0005c6280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 14:20:02.849721 2204 log.go:172] (0xc00013cfd0) Data frame received for 3\nI0514 14:20:02.849754 2204 log.go:172] (0xc0003ec000) (3) Data frame handling\nI0514 14:20:02.849777 2204 log.go:172] (0xc0003ec000) (3) Data frame sent\nI0514 14:20:02.850007 2204 log.go:172] (0xc00013cfd0) Data frame received for 3\nI0514 14:20:02.850036 2204 log.go:172] (0xc0003ec000) (3) Data frame handling\nI0514 14:20:02.850281 2204 log.go:172] (0xc00013cfd0) Data frame received for 5\nI0514 14:20:02.850314 2204 log.go:172] (0xc0005c6280) (5) Data frame handling\nI0514 14:20:02.852051 2204 log.go:172] (0xc00013cfd0) Data frame received for 1\nI0514 14:20:02.852084 2204 log.go:172] (0xc0005c6a00) (1) Data frame handling\nI0514 14:20:02.852103 2204 log.go:172] (0xc0005c6a00) (1) Data frame sent\nI0514 14:20:02.852127 2204 log.go:172] (0xc00013cfd0) (0xc0005c6a00) Stream removed, broadcasting: 1\nI0514 14:20:02.852173 2204 log.go:172] (0xc00013cfd0) Go away received\nI0514 14:20:02.852566 2204 log.go:172] (0xc00013cfd0) (0xc0005c6a00) Stream removed, broadcasting: 1\nI0514 14:20:02.852594 2204 log.go:172] (0xc00013cfd0) (0xc0003ec000) 
Stream removed, broadcasting: 3\nI0514 14:20:02.852600 2204 log.go:172] (0xc00013cfd0) (0xc0005c6280) Stream removed, broadcasting: 5\n" May 14 14:20:02.859: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 14:20:02.859: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 14:20:02.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2129 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 14:20:03.089: INFO: stderr: "I0514 14:20:02.986285 2226 log.go:172] (0xc000a2e370) (0xc0005006e0) Create stream\nI0514 14:20:02.986343 2226 log.go:172] (0xc000a2e370) (0xc0005006e0) Stream added, broadcasting: 1\nI0514 14:20:02.990290 2226 log.go:172] (0xc000a2e370) Reply frame received for 1\nI0514 14:20:02.990340 2226 log.go:172] (0xc000a2e370) (0xc000500000) Create stream\nI0514 14:20:02.990353 2226 log.go:172] (0xc000a2e370) (0xc000500000) Stream added, broadcasting: 3\nI0514 14:20:02.991468 2226 log.go:172] (0xc000a2e370) Reply frame received for 3\nI0514 14:20:02.991528 2226 log.go:172] (0xc000a2e370) (0xc0005000a0) Create stream\nI0514 14:20:02.991548 2226 log.go:172] (0xc000a2e370) (0xc0005000a0) Stream added, broadcasting: 5\nI0514 14:20:02.992594 2226 log.go:172] (0xc000a2e370) Reply frame received for 5\nI0514 14:20:03.057666 2226 log.go:172] (0xc000a2e370) Data frame received for 5\nI0514 14:20:03.057696 2226 log.go:172] (0xc0005000a0) (5) Data frame handling\nI0514 14:20:03.057715 2226 log.go:172] (0xc0005000a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0514 14:20:03.083317 2226 log.go:172] (0xc000a2e370) Data frame received for 3\nI0514 14:20:03.083361 2226 log.go:172] (0xc000500000) (3) Data frame handling\nI0514 14:20:03.083397 2226 log.go:172] (0xc000500000) (3) Data frame sent\nI0514 14:20:03.083426 2226 log.go:172] (0xc000a2e370) Data frame received for 3\nI0514 14:20:03.083438 2226 log.go:172] (0xc000500000) (3) Data frame handling\nI0514 14:20:03.083675 2226 log.go:172] (0xc000a2e370) Data frame received for 5\nI0514 14:20:03.083702 2226 log.go:172] (0xc0005000a0) (5) Data frame handling\nI0514 14:20:03.084980 2226 log.go:172] (0xc000a2e370) Data frame received for 1\nI0514 14:20:03.085009 2226 log.go:172] (0xc0005006e0) (1) Data frame handling\nI0514 14:20:03.085023 2226 log.go:172] (0xc0005006e0) (1) Data frame sent\nI0514 14:20:03.085048 2226 log.go:172] (0xc000a2e370) (0xc0005006e0) Stream removed, broadcasting: 1\nI0514 14:20:03.085069 2226 log.go:172] (0xc000a2e370) Go away received\nI0514 14:20:03.085394 2226 log.go:172] (0xc000a2e370) (0xc0005006e0) Stream removed, broadcasting: 1\nI0514 14:20:03.085414 2226 log.go:172] (0xc000a2e370) (0xc000500000) Stream removed, broadcasting: 3\nI0514 14:20:03.085421 2226 log.go:172] (0xc000a2e370) (0xc0005000a0) Stream removed, broadcasting: 5\n" May 14 14:20:03.089: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 14:20:03.089: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 14:20:03.089: INFO: Waiting for statefulset status.replicas updated to 0 May 14 14:20:03.093: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 14 14:20:13.102: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 14:20:13.102: INFO: Waiting 
for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 14:20:13.102: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 14:20:13.120: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:13.120: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:13.120: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:13.120: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:13.120: INFO: May 14 14:20:13.120: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 14:20:14.255: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:14.255: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:14.255: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:14.255: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:14.255: INFO: May 14 14:20:14.255: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 14:20:15.338: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:15.338: 
INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:15.338: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:15.338: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:15.338: INFO: May 14 14:20:15.338: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 14:20:16.343: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:16.343: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:16.343: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:16.343: INFO: May 14 14:20:16.343: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 14:20:17.348: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:17.348: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:17.348: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:17.348: INFO: May 14 14:20:17.348: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 14:20:18.352: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:18.352: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:18.352: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:18.352: INFO: May 14 14:20:18.352: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 14:20:19.358: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:19.358: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:19.358: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:19.358: INFO: May 14 14:20:19.358: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 14:20:20.361: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:20.361: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:20.361: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:20.361: INFO: May 14 14:20:20.361: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 14:20:21.366: INFO: POD NODE PHASE GRACE CONDITIONS May 14 14:20:21.366: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:30 +0000 UTC }] May 14 14:20:21.366: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:20:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:19:51 +0000 UTC }] May 14 14:20:21.366: INFO: May 14 14:20:21.366: INFO: StatefulSet ss has not reached scale 0, at 2 May 14 14:20:22.370: INFO: Verifying statefulset ss doesn't scale past 0 for another 743.932461ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-2129 May 14 14:20:23.373: INFO: Scaling statefulset ss to 0 May 14 14:20:23.383: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 14 14:20:23.384: INFO: Deleting all statefulsets in ns statefulset-2129 May 14 14:20:23.386: INFO: Scaling statefulset ss to 0 May 14 14:20:23.392: INFO: Waiting for statefulset status.replicas updated to 0 May 14 14:20:23.394: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:20:23.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2129" for this suite.
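The scale-down the suite just drove through the API can also be reproduced by hand with kubectl. A minimal sketch, assuming the same ss StatefulSet in namespace statefulset-2129 and the nginx readiness probe that the exec'd mv commands above are defeating; none of these commands are emitted by the suite itself:

  # Fail the readiness probe on each pod (as the test does), then scale to 0.
  for i in 0 1 2; do
    kubectl -n statefulset-2129 exec ss-$i -- /bin/sh -c \
      'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  done
  kubectl -n statefulset-2129 scale statefulset ss --replicas=0
  kubectl -n statefulset-2129 get pods -w   # watch ss-0, ss-1, ss-2 terminate together

"Burst scaling" refers to podManagementPolicy: Parallel, which is why the controller deletes all three unready pods at once instead of waiting for each ordinal to finish terminating before touching the next.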
May 14 14:20:29.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:20:29.537: INFO: namespace statefulset-2129 deletion completed in 6.130596985s • [SLOW TEST:58.673 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:20:29.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 14 14:20:29.637: INFO: Waiting up to 5m0s for pod "client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99" in namespace "containers-2957" to be "success or failure" May 14 14:20:29.655: INFO: Pod "client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99": Phase="Pending", Reason="", readiness=false. Elapsed: 17.641099ms May 14 14:20:31.659: INFO: Pod "client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021674549s May 14 14:20:33.662: INFO: Pod "client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025432065s STEP: Saw pod success May 14 14:20:33.662: INFO: Pod "client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99" satisfied condition "success or failure" May 14 14:20:33.664: INFO: Trying to get logs from node iruya-worker pod client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99 container test-container: STEP: delete the pod May 14 14:20:33.702: INFO: Waiting for pod client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99 to disappear May 14 14:20:33.730: INFO: Pod client-containers-c9186c1f-3c1c-43e2-80a1-a0a291b57e99 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:20:33.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2957" for this suite. 
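The pod this test creates sets neither command nor args, so the container falls back to the image's ENTRYPOINT/CMD. A minimal equivalent manifest; the pod name and image are illustrative, not the ones the suite generated:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29   # no command/args: the image's default CMD runs
  EOF

With busybox the default CMD is sh, which exits immediately when there is no tty, so the pod reaches Succeeded just as the "success or failure" wait in the log expects.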
May 14 14:20:39.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:20:39.889: INFO: namespace containers-2957 deletion completed in 6.155428025s • [SLOW TEST:10.351 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:20:39.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 14 14:20:39.984: INFO: Waiting up to 5m0s for pod "pod-64d52a02-ab31-4f55-a04f-884180544b2b" in namespace "emptydir-8341" to be "success or failure" May 14 14:20:40.001: INFO: Pod "pod-64d52a02-ab31-4f55-a04f-884180544b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.178627ms May 14 14:20:42.006: INFO: Pod "pod-64d52a02-ab31-4f55-a04f-884180544b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021787064s May 14 14:20:44.010: INFO: Pod "pod-64d52a02-ab31-4f55-a04f-884180544b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025791991s STEP: Saw pod success May 14 14:20:44.010: INFO: Pod "pod-64d52a02-ab31-4f55-a04f-884180544b2b" satisfied condition "success or failure" May 14 14:20:44.013: INFO: Trying to get logs from node iruya-worker2 pod pod-64d52a02-ab31-4f55-a04f-884180544b2b container test-container: STEP: delete the pod May 14 14:20:44.333: INFO: Waiting for pod pod-64d52a02-ab31-4f55-a04f-884180544b2b to disappear May 14 14:20:44.342: INFO: Pod pod-64d52a02-ab31-4f55-a04f-884180544b2b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:20:44.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8341" for this suite. 
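What this test checks is the permission bits on the emptyDir mount point itself: on the default medium the kubelet creates the directory world-writable (0777). A rough equivalent, with illustrative names and image:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}               # default medium: backed by node storage
  EOF
  kubectl logs emptydir-mode-demo   # expect drwxrwxrwx on /test-volume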
May 14 14:20:50.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:20:50.442: INFO: namespace emptydir-8341 deletion completed in 6.096082926s • [SLOW TEST:10.553 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:20:50.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-21865577-b4c8-4031-bff4-7d8d271ceddf STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:20:56.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4684" for this suite. 
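Besides the usual data field, a ConfigMap can carry base64-encoded binaryData, and a configMap volume exposes both kinds of key as files; that round trip is what the "Waiting for pod with text data" / "Waiting for pod with binary data" steps verify. A hedged sketch (names and contents are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: binary-demo
  data:
    text.txt: "hello"
  binaryData:
    dump.bin: //79/A==        # raw bytes 0xff 0xfe 0xfd 0xfc, base64-encoded
  EOF
  # Mounted as a configMap volume, both keys appear as files:
  #   <mountPath>/text.txt holds the string, <mountPath>/dump.bin the decoded bytes.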
May 14 14:21:22.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:21:22.703: INFO: namespace configmap-4684 deletion completed in 26.083337112s • [SLOW TEST:32.260 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:21:22.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-vq7k STEP: Creating a pod to test atomic-volume-subpath May 14 14:21:22.781: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vq7k" in namespace "subpath-3479" to be "success or failure" May 14 14:21:22.823: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Pending", Reason="", readiness=false. Elapsed: 41.859774ms May 14 14:21:24.826: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045168359s May 14 14:21:26.831: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 4.049967509s May 14 14:21:28.835: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 6.054527905s May 14 14:21:30.839: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 8.058415881s May 14 14:21:32.842: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 10.061514521s May 14 14:21:34.846: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 12.064929065s May 14 14:21:36.849: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 14.06791634s May 14 14:21:38.853: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 16.072285785s May 14 14:21:40.858: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 18.077159929s May 14 14:21:42.862: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 20.08127299s May 14 14:21:44.865: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. Elapsed: 22.08446055s May 14 14:21:46.869: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.088176466s May 14 14:21:48.873: INFO: Pod "pod-subpath-test-configmap-vq7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.092131841s STEP: Saw pod success May 14 14:21:48.873: INFO: Pod "pod-subpath-test-configmap-vq7k" satisfied condition "success or failure" May 14 14:21:48.876: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-vq7k container test-container-subpath-configmap-vq7k: STEP: delete the pod May 14 14:21:48.940: INFO: Waiting for pod pod-subpath-test-configmap-vq7k to disappear May 14 14:21:48.997: INFO: Pod pod-subpath-test-configmap-vq7k no longer exists STEP: Deleting pod pod-subpath-test-configmap-vq7k May 14 14:21:48.997: INFO: Deleting pod "pod-subpath-test-configmap-vq7k" in namespace "subpath-3479" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:21:48.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3479" for this suite. May 14 14:21:55.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:21:55.094: INFO: namespace subpath-3479 deletion completed in 6.092386151s • [SLOW TEST:32.391 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:21:55.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:21:59.369: INFO: Waiting up to 5m0s for pod "client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933" in namespace "pods-4860" to be "success or failure" May 14 14:21:59.371: INFO: Pod "client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247715ms May 14 14:22:01.375: INFO: Pod "client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005878556s May 14 14:22:03.380: INFO: Pod "client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011203265s STEP: Saw pod success May 14 14:22:03.380: INFO: Pod "client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933" satisfied condition "success or failure" May 14 14:22:03.384: INFO: Trying to get logs from node iruya-worker pod client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933 container env3cont: STEP: delete the pod May 14 14:22:03.452: INFO: Waiting for pod client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933 to disappear May 14 14:22:03.589: INFO: Pod client-envvars-ef8acfaf-0483-4b25-9789-cfbddf642933 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:22:03.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4860" for this suite. May 14 14:22:53.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:22:53.732: INFO: namespace pods-4860 deletion completed in 50.140399203s • [SLOW TEST:58.638 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:22:53.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 14 14:22:53.807: INFO: Pod name pod-release: Found 0 pods out of 1 May 14 14:22:58.818: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:22:59.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1181" for this suite. 
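"Released" in this test means orphaned: once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference and creates a replacement to restore the replica count. A sketch of the same flow with kubectl, using illustrative names and image:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-release
  spec:
    replicas: 1
    selector:
      name: pod-release
    template:
      metadata:
        labels:
          name: pod-release
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14
  EOF
  POD=$(kubectl get pods -l name=pod-release -o name | head -n 1)
  kubectl label --overwrite "$POD" name=released   # no longer matches the selector
  kubectl get pods                                 # the released pod lingers; the RC starts a new one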
May 14 14:23:05.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:23:06.311: INFO: namespace replication-controller-1181 deletion completed in 6.461502313s • [SLOW TEST:12.578 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:23:06.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 14 14:23:06.456: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 14:23:06.463: INFO: Waiting for terminating namespaces to be deleted... May 14 14:23:06.465: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 14 14:23:06.470: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 14 14:23:06.470: INFO: Container kube-proxy ready: true, restart count 0 May 14 14:23:06.470: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 14 14:23:06.470: INFO: Container kindnet-cni ready: true, restart count 0 May 14 14:23:06.470: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 14 14:23:06.474: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 14 14:23:06.474: INFO: Container coredns ready: true, restart count 0 May 14 14:23:06.474: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 14 14:23:06.474: INFO: Container coredns ready: true, restart count 0 May 14 14:23:06.474: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 14 14:23:06.474: INFO: Container kindnet-cni ready: true, restart count 0 May 14 14:23:06.474: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 14 14:23:06.474: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160eeb13441d3908], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
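The FailedScheduling event above is exactly what an unsatisfiable nodeSelector produces. To reproduce it outside the suite (the label key and image are illustrative assumptions):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod
  spec:
    nodeSelector:
      example.com/no-such-label: "42"   # no node carries this label
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  EOF
  kubectl describe pod restricted-pod   # Events: ... FailedScheduling ... node(s) didn't match node selector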
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:23:07.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1636" for this suite. May 14 14:23:13.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:23:13.585: INFO: namespace sched-pred-1636 deletion completed in 6.090424798s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.274 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:23:13.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:23:13.675: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092" in namespace "downward-api-6256" to be "success or failure" May 14 14:23:13.677: INFO: Pod "downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42894ms May 14 14:23:15.681: INFO: Pod "downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006437446s May 14 14:23:17.684: INFO: Pod "downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009692335s STEP: Saw pod success May 14 14:23:17.684: INFO: Pod "downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092" satisfied condition "success or failure" May 14 14:23:17.687: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092 container client-container: STEP: delete the pod May 14 14:23:17.805: INFO: Waiting for pod downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092 to disappear May 14 14:23:17.877: INFO: Pod downwardapi-volume-36ec6ce2-0d1a-4da5-9863-83f8e3395092 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:23:17.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6256" for this suite. 
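The "podname only" case projects a single downward API field, metadata.name, into a file in the volume. A minimal sketch; paths, names, and image are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF
  kubectl logs downward-podname-demo   # prints: downward-podname-demo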
May 14 14:23:23.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:23:23.986: INFO: namespace downward-api-6256 deletion completed in 6.105447756s • [SLOW TEST:10.400 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:23:23.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:23:24.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975" in namespace "projected-9798" to be "success or failure" May 14 14:23:24.117: INFO: Pod "downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975": Phase="Pending", Reason="", readiness=false. Elapsed: 32.083488ms May 14 14:23:26.120: INFO: Pod "downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035466163s May 14 14:23:28.124: INFO: Pod "downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03932862s STEP: Saw pod success May 14 14:23:28.124: INFO: Pod "downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975" satisfied condition "success or failure" May 14 14:23:28.127: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975 container client-container: STEP: delete the pod May 14 14:23:28.143: INFO: Waiting for pod downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975 to disappear May 14 14:23:28.165: INFO: Pod downwardapi-volume-4faa05cf-0bd9-4838-847e-1a9ce9c7e975 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:23:28.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9798" for this suite. 
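Here the projected volume carries a resourceFieldRef instead of a fieldRef, so the container can read its own memory limit back from a file. A sketch under the same illustrative-name assumptions as above:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF
  kubectl logs projected-memlimit-demo   # 67108864 (64Mi expressed in bytes)

The value is written in bytes because resourceFieldRef quantities are canonicalized before projection.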
May 14 14:23:34.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:23:34.348: INFO: namespace projected-9798 deletion completed in 6.179042242s • [SLOW TEST:10.362 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:23:34.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 14 14:23:34.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 14 14:23:34.609: INFO: stderr: "" May 14 14:23:34.609: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:23:34.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5487" for this suite. 
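The check itself is a one-liner: kubectl api-versions prints one group/version per line, and the conformance test asserts that the core group, the bare v1, is among them.

  kubectl api-versions                               # apps/v1, batch/v1, ..., v1
  kubectl api-versions | grep -x v1 && echo "core v1 is served"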
May 14 14:23:40.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:23:40.740: INFO: namespace kubectl-5487 deletion completed in 6.127002472s • [SLOW TEST:6.391 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:23:40.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:23:40.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c" in namespace "projected-3745" to be "success or failure" May 14 14:23:40.862: INFO: Pod "downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c": Phase="Pending", Reason="", readiness=false. Elapsed: 49.134504ms May 14 14:23:42.881: INFO: Pod "downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067976712s May 14 14:23:44.909: INFO: Pod "downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095859487s May 14 14:23:46.913: INFO: Pod "downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.100286342s STEP: Saw pod success May 14 14:23:46.913: INFO: Pod "downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c" satisfied condition "success or failure" May 14 14:23:46.917: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c container client-container: STEP: delete the pod May 14 14:23:46.950: INFO: Waiting for pod downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c to disappear May 14 14:23:46.954: INFO: Pod downwardapi-volume-3d3392f6-ed69-426a-b43b-02411162397c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:23:46.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3745" for this suite. 
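This variant is the earlier memory-limit projection minus the resources.limits block: when a container declares no memory limit, the downward API substitutes the node's allocatable memory. A sketch, again with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-nolimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container          # note: no resources.limits here
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF
  kubectl logs projected-nolimit-demo   # the node's allocatable memory, in bytes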
May 14 14:23:53.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:23:53.110: INFO: namespace projected-3745 deletion completed in 6.152318196s • [SLOW TEST:12.369 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:23:53.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 14 14:23:53.250: INFO: Waiting up to 5m0s for pod "pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849" in namespace "emptydir-3883" to be "success or failure" May 14 14:23:53.268: INFO: Pod "pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849": Phase="Pending", Reason="", readiness=false. Elapsed: 17.536045ms May 14 14:23:55.388: INFO: Pod "pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137113406s May 14 14:23:57.392: INFO: Pod "pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141414746s STEP: Saw pod success May 14 14:23:57.392: INFO: Pod "pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849" satisfied condition "success or failure" May 14 14:23:57.395: INFO: Trying to get logs from node iruya-worker pod pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849 container test-container: STEP: delete the pod May 14 14:23:57.545: INFO: Waiting for pod pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849 to disappear May 14 14:23:57.559: INFO: Pod pod-1080f5a0-9a30-4a14-8412-e8e9f8c50849 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:23:57.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3883" for this suite. 
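The (non-root,0777,tmpfs) triple in the test name decodes to: run as a non-root UID, write a file with 0777 permissions, on a memory-backed emptyDir. A hedged sketch with illustrative names, UID, and image:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                  # non-root
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory                 # tmpfs
  EOF
  kubectl logs emptydir-tmpfs-demo     # -rwxrwxrwx ... /test-volume/f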
May 14 14:24:03.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:24:03.703: INFO: namespace emptydir-3883 deletion completed in 6.118439377s • [SLOW TEST:10.594 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:24:03.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:24:10.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4868" for this suite. May 14 14:24:16.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:24:16.297: INFO: namespace namespaces-4868 deletion completed in 6.099547176s STEP: Destroying namespace "nsdeletetest-3557" for this suite. May 14 14:24:16.299: INFO: Namespace nsdeletetest-3557 was already deleted STEP: Destroying namespace "nsdeletetest-396" for this suite. 
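Namespace deletion is cascading: every namespaced object inside, services included, goes with it, which is why the test recreates the namespace afterwards to prove the service did not survive. The same check by hand (the namespace and service names are illustrative):

  kubectl create namespace ns-demo
  kubectl -n ns-demo create service clusterip test-svc --tcp=80:80
  kubectl delete namespace ns-demo      # blocks until the namespace finishes terminating
  kubectl create namespace ns-demo      # recreate under the same name
  kubectl -n ns-demo get services       # "No resources found": the service went with the old namespace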
May 14 14:24:22.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:24:22.372: INFO: namespace nsdeletetest-396 deletion completed in 6.072431802s • [SLOW TEST:18.668 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:24:22.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 14 14:24:32.504: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:32.504: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:32.539389 6 log.go:172] (0xc0031b4790) (0xc003862320) Create stream I0514 14:24:32.539420 6 log.go:172] (0xc0031b4790) (0xc003862320) Stream added, broadcasting: 1 I0514 14:24:32.541797 6 log.go:172] (0xc0031b4790) Reply frame received for 1 I0514 14:24:32.541827 6 log.go:172] (0xc0031b4790) (0xc0038623c0) Create stream I0514 14:24:32.541834 6 log.go:172] (0xc0031b4790) (0xc0038623c0) Stream added, broadcasting: 3 I0514 14:24:32.542836 6 log.go:172] (0xc0031b4790) Reply frame received for 3 I0514 14:24:32.542875 6 log.go:172] (0xc0031b4790) (0xc003599cc0) Create stream I0514 14:24:32.542885 6 log.go:172] (0xc0031b4790) (0xc003599cc0) Stream added, broadcasting: 5 I0514 14:24:32.543970 6 log.go:172] (0xc0031b4790) Reply frame received for 5 I0514 14:24:32.607848 6 log.go:172] (0xc0031b4790) Data frame received for 3 I0514 14:24:32.607878 6 log.go:172] (0xc0038623c0) (3) Data frame handling I0514 14:24:32.607888 6 log.go:172] (0xc0038623c0) (3) Data frame sent I0514 14:24:32.607894 6 log.go:172] (0xc0031b4790) Data frame received for 3 I0514 14:24:32.607900 6 log.go:172] (0xc0038623c0) (3) Data frame handling I0514 14:24:32.608692 6 log.go:172] (0xc0031b4790) Data frame received for 5 I0514 14:24:32.608724 6 log.go:172] (0xc003599cc0) (5) Data frame handling I0514 14:24:32.610338 6 log.go:172] (0xc0031b4790) Data frame received for 1 I0514 14:24:32.610376 6 log.go:172] (0xc003862320) (1) Data frame handling I0514 14:24:32.610413 6 log.go:172] (0xc003862320) (1) Data frame sent I0514 14:24:32.610440 6 log.go:172] (0xc0031b4790) (0xc003862320) Stream removed, 
broadcasting: 1 I0514 14:24:32.610463 6 log.go:172] (0xc0031b4790) Go away received I0514 14:24:32.610747 6 log.go:172] (0xc0031b4790) (0xc003862320) Stream removed, broadcasting: 1 I0514 14:24:32.610761 6 log.go:172] (0xc0031b4790) (0xc0038623c0) Stream removed, broadcasting: 3 I0514 14:24:32.610770 6 log.go:172] (0xc0031b4790) (0xc003599cc0) Stream removed, broadcasting: 5 May 14 14:24:32.610: INFO: Exec stderr: "" May 14 14:24:32.610: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:32.610: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:32.636490 6 log.go:172] (0xc00266f4a0) (0xc003647540) Create stream I0514 14:24:32.636512 6 log.go:172] (0xc00266f4a0) (0xc003647540) Stream added, broadcasting: 1 I0514 14:24:32.644616 6 log.go:172] (0xc00266f4a0) Reply frame received for 1 I0514 14:24:32.644650 6 log.go:172] (0xc00266f4a0) (0xc001134000) Create stream I0514 14:24:32.644658 6 log.go:172] (0xc00266f4a0) (0xc001134000) Stream added, broadcasting: 3 I0514 14:24:32.645454 6 log.go:172] (0xc00266f4a0) Reply frame received for 3 I0514 14:24:32.645492 6 log.go:172] (0xc00266f4a0) (0xc001134140) Create stream I0514 14:24:32.645502 6 log.go:172] (0xc00266f4a0) (0xc001134140) Stream added, broadcasting: 5 I0514 14:24:32.646180 6 log.go:172] (0xc00266f4a0) Reply frame received for 5 I0514 14:24:32.698453 6 log.go:172] (0xc00266f4a0) Data frame received for 5 I0514 14:24:32.698500 6 log.go:172] (0xc001134140) (5) Data frame handling I0514 14:24:32.698532 6 log.go:172] (0xc00266f4a0) Data frame received for 3 I0514 14:24:32.698552 6 log.go:172] (0xc001134000) (3) Data frame handling I0514 14:24:32.698594 6 log.go:172] (0xc001134000) (3) Data frame sent I0514 14:24:32.698627 6 log.go:172] (0xc00266f4a0) Data frame received for 3 I0514 14:24:32.698641 6 log.go:172] (0xc001134000) (3) Data frame handling I0514 14:24:32.699775 6 log.go:172] (0xc00266f4a0) Data frame received for 1 I0514 14:24:32.699801 6 log.go:172] (0xc003647540) (1) Data frame handling I0514 14:24:32.699833 6 log.go:172] (0xc003647540) (1) Data frame sent I0514 14:24:32.699859 6 log.go:172] (0xc00266f4a0) (0xc003647540) Stream removed, broadcasting: 1 I0514 14:24:32.699878 6 log.go:172] (0xc00266f4a0) Go away received I0514 14:24:32.699976 6 log.go:172] (0xc00266f4a0) (0xc003647540) Stream removed, broadcasting: 1 I0514 14:24:32.700008 6 log.go:172] (0xc00266f4a0) (0xc001134000) Stream removed, broadcasting: 3 I0514 14:24:32.700030 6 log.go:172] (0xc00266f4a0) (0xc001134140) Stream removed, broadcasting: 5 May 14 14:24:32.700: INFO: Exec stderr: "" May 14 14:24:32.700: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:32.700: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:32.731084 6 log.go:172] (0xc0009d9a20) (0xc000a34820) Create stream I0514 14:24:32.731110 6 log.go:172] (0xc0009d9a20) (0xc000a34820) Stream added, broadcasting: 1 I0514 14:24:32.732463 6 log.go:172] (0xc0009d9a20) Reply frame received for 1 I0514 14:24:32.732517 6 log.go:172] (0xc0009d9a20) (0xc0003a40a0) Create stream I0514 14:24:32.732539 6 log.go:172] (0xc0009d9a20) (0xc0003a40a0) Stream added, broadcasting: 3 I0514 14:24:32.733581 6 log.go:172] (0xc0009d9a20) Reply frame received for 3 I0514 14:24:32.733624 6 log.go:172] 
(0xc0009d9a20) (0xc000a348c0) Create stream I0514 14:24:32.733639 6 log.go:172] (0xc0009d9a20) (0xc000a348c0) Stream added, broadcasting: 5 I0514 14:24:32.734389 6 log.go:172] (0xc0009d9a20) Reply frame received for 5 I0514 14:24:32.803813 6 log.go:172] (0xc0009d9a20) Data frame received for 3 I0514 14:24:32.803857 6 log.go:172] (0xc0003a40a0) (3) Data frame handling I0514 14:24:32.803896 6 log.go:172] (0xc0003a40a0) (3) Data frame sent I0514 14:24:32.803917 6 log.go:172] (0xc0009d9a20) Data frame received for 3 I0514 14:24:32.803936 6 log.go:172] (0xc0003a40a0) (3) Data frame handling I0514 14:24:32.803963 6 log.go:172] (0xc0009d9a20) Data frame received for 5 I0514 14:24:32.803981 6 log.go:172] (0xc000a348c0) (5) Data frame handling I0514 14:24:32.805693 6 log.go:172] (0xc0009d9a20) Data frame received for 1 I0514 14:24:32.805714 6 log.go:172] (0xc000a34820) (1) Data frame handling I0514 14:24:32.805729 6 log.go:172] (0xc000a34820) (1) Data frame sent I0514 14:24:32.805839 6 log.go:172] (0xc0009d9a20) (0xc000a34820) Stream removed, broadcasting: 1 I0514 14:24:32.805904 6 log.go:172] (0xc0009d9a20) (0xc000a34820) Stream removed, broadcasting: 1 I0514 14:24:32.805918 6 log.go:172] (0xc0009d9a20) (0xc0003a40a0) Stream removed, broadcasting: 3 I0514 14:24:32.805928 6 log.go:172] (0xc0009d9a20) (0xc000a348c0) Stream removed, broadcasting: 5 May 14 14:24:32.805: INFO: Exec stderr: "" I0514 14:24:32.805945 6 log.go:172] (0xc0009d9a20) Go away received May 14 14:24:32.805: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:32.805: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:32.837242 6 log.go:172] (0xc00266ea50) (0xc0031c0320) Create stream I0514 14:24:32.837275 6 log.go:172] (0xc00266ea50) (0xc0031c0320) Stream added, broadcasting: 1 I0514 14:24:32.839021 6 log.go:172] (0xc00266ea50) Reply frame received for 1 I0514 14:24:32.839059 6 log.go:172] (0xc00266ea50) (0xc000a34960) Create stream I0514 14:24:32.839073 6 log.go:172] (0xc00266ea50) (0xc000a34960) Stream added, broadcasting: 3 I0514 14:24:32.839838 6 log.go:172] (0xc00266ea50) Reply frame received for 3 I0514 14:24:32.839877 6 log.go:172] (0xc00266ea50) (0xc0003a4140) Create stream I0514 14:24:32.839887 6 log.go:172] (0xc00266ea50) (0xc0003a4140) Stream added, broadcasting: 5 I0514 14:24:32.840519 6 log.go:172] (0xc00266ea50) Reply frame received for 5 I0514 14:24:32.897306 6 log.go:172] (0xc00266ea50) Data frame received for 5 I0514 14:24:32.897329 6 log.go:172] (0xc0003a4140) (5) Data frame handling I0514 14:24:32.897365 6 log.go:172] (0xc00266ea50) Data frame received for 3 I0514 14:24:32.897385 6 log.go:172] (0xc000a34960) (3) Data frame handling I0514 14:24:32.897404 6 log.go:172] (0xc000a34960) (3) Data frame sent I0514 14:24:32.897415 6 log.go:172] (0xc00266ea50) Data frame received for 3 I0514 14:24:32.897423 6 log.go:172] (0xc000a34960) (3) Data frame handling I0514 14:24:32.898628 6 log.go:172] (0xc00266ea50) Data frame received for 1 I0514 14:24:32.898647 6 log.go:172] (0xc0031c0320) (1) Data frame handling I0514 14:24:32.898663 6 log.go:172] (0xc0031c0320) (1) Data frame sent I0514 14:24:32.898674 6 log.go:172] (0xc00266ea50) (0xc0031c0320) Stream removed, broadcasting: 1 I0514 14:24:32.898685 6 log.go:172] (0xc00266ea50) Go away received I0514 14:24:32.898786 6 log.go:172] (0xc00266ea50) (0xc0031c0320) Stream removed, broadcasting: 1 I0514 
14:24:32.898798 6 log.go:172] (0xc00266ea50) (0xc000a34960) Stream removed, broadcasting: 3 I0514 14:24:32.898805 6 log.go:172] (0xc00266ea50) (0xc0003a4140) Stream removed, broadcasting: 5 May 14 14:24:32.898: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 14 14:24:32.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:32.898: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:32.923373 6 log.go:172] (0xc003adac60) (0xc001134960) Create stream I0514 14:24:32.923396 6 log.go:172] (0xc003adac60) (0xc001134960) Stream added, broadcasting: 1 I0514 14:24:32.925370 6 log.go:172] (0xc003adac60) Reply frame received for 1 I0514 14:24:32.925400 6 log.go:172] (0xc003adac60) (0xc001134b40) Create stream I0514 14:24:32.925411 6 log.go:172] (0xc003adac60) (0xc001134b40) Stream added, broadcasting: 3 I0514 14:24:32.926245 6 log.go:172] (0xc003adac60) Reply frame received for 3 I0514 14:24:32.926278 6 log.go:172] (0xc003adac60) (0xc001134f00) Create stream I0514 14:24:32.926289 6 log.go:172] (0xc003adac60) (0xc001134f00) Stream added, broadcasting: 5 I0514 14:24:32.926933 6 log.go:172] (0xc003adac60) Reply frame received for 5 I0514 14:24:32.986041 6 log.go:172] (0xc003adac60) Data frame received for 3 I0514 14:24:32.986080 6 log.go:172] (0xc001134b40) (3) Data frame handling I0514 14:24:32.986091 6 log.go:172] (0xc001134b40) (3) Data frame sent I0514 14:24:32.986098 6 log.go:172] (0xc003adac60) Data frame received for 3 I0514 14:24:32.986103 6 log.go:172] (0xc001134b40) (3) Data frame handling I0514 14:24:32.986126 6 log.go:172] (0xc003adac60) Data frame received for 5 I0514 14:24:32.986161 6 log.go:172] (0xc001134f00) (5) Data frame handling I0514 14:24:32.987031 6 log.go:172] (0xc003adac60) Data frame received for 1 I0514 14:24:32.987052 6 log.go:172] (0xc001134960) (1) Data frame handling I0514 14:24:32.987065 6 log.go:172] (0xc001134960) (1) Data frame sent I0514 14:24:32.987077 6 log.go:172] (0xc003adac60) (0xc001134960) Stream removed, broadcasting: 1 I0514 14:24:32.987133 6 log.go:172] (0xc003adac60) Go away received I0514 14:24:32.987186 6 log.go:172] (0xc003adac60) (0xc001134960) Stream removed, broadcasting: 1 I0514 14:24:32.987223 6 log.go:172] (0xc003adac60) (0xc001134b40) Stream removed, broadcasting: 3 I0514 14:24:32.987238 6 log.go:172] (0xc003adac60) (0xc001134f00) Stream removed, broadcasting: 5 May 14 14:24:32.987: INFO: Exec stderr: "" May 14 14:24:32.987: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:32.987: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:33.016855 6 log.go:172] (0xc00266fa20) (0xc0031c0640) Create stream I0514 14:24:33.016870 6 log.go:172] (0xc00266fa20) (0xc0031c0640) Stream added, broadcasting: 1 I0514 14:24:33.018593 6 log.go:172] (0xc00266fa20) Reply frame received for 1 I0514 14:24:33.018632 6 log.go:172] (0xc00266fa20) (0xc0003a43c0) Create stream I0514 14:24:33.018646 6 log.go:172] (0xc00266fa20) (0xc0003a43c0) Stream added, broadcasting: 3 I0514 14:24:33.019456 6 log.go:172] (0xc00266fa20) Reply frame received for 3 I0514 14:24:33.019517 6 log.go:172] (0xc00266fa20) (0xc0003a4aa0) Create stream I0514 14:24:33.019540 6 log.go:172] 
(0xc00266fa20) (0xc0003a4aa0) Stream added, broadcasting: 5 I0514 14:24:33.020336 6 log.go:172] (0xc00266fa20) Reply frame received for 5 I0514 14:24:33.084199 6 log.go:172] (0xc00266fa20) Data frame received for 3 I0514 14:24:33.084229 6 log.go:172] (0xc0003a43c0) (3) Data frame handling I0514 14:24:33.084240 6 log.go:172] (0xc0003a43c0) (3) Data frame sent I0514 14:24:33.084248 6 log.go:172] (0xc00266fa20) Data frame received for 3 I0514 14:24:33.084255 6 log.go:172] (0xc0003a43c0) (3) Data frame handling I0514 14:24:33.084289 6 log.go:172] (0xc00266fa20) Data frame received for 5 I0514 14:24:33.084395 6 log.go:172] (0xc0003a4aa0) (5) Data frame handling I0514 14:24:33.085672 6 log.go:172] (0xc00266fa20) Data frame received for 1 I0514 14:24:33.085689 6 log.go:172] (0xc0031c0640) (1) Data frame handling I0514 14:24:33.085698 6 log.go:172] (0xc0031c0640) (1) Data frame sent I0514 14:24:33.085779 6 log.go:172] (0xc00266fa20) (0xc0031c0640) Stream removed, broadcasting: 1 I0514 14:24:33.085808 6 log.go:172] (0xc00266fa20) Go away received I0514 14:24:33.085897 6 log.go:172] (0xc00266fa20) (0xc0031c0640) Stream removed, broadcasting: 1 I0514 14:24:33.085934 6 log.go:172] (0xc00266fa20) (0xc0003a43c0) Stream removed, broadcasting: 3 I0514 14:24:33.085967 6 log.go:172] (0xc00266fa20) (0xc0003a4aa0) Stream removed, broadcasting: 5 May 14 14:24:33.085: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 14 14:24:33.086: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:33.086: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:33.121484 6 log.go:172] (0xc000f102c0) (0xc0031c0960) Create stream I0514 14:24:33.121513 6 log.go:172] (0xc000f102c0) (0xc0031c0960) Stream added, broadcasting: 1 I0514 14:24:33.129892 6 log.go:172] (0xc000f102c0) Reply frame received for 1 I0514 14:24:33.129939 6 log.go:172] (0xc000f102c0) (0xc0003a4b40) Create stream I0514 14:24:33.129950 6 log.go:172] (0xc000f102c0) (0xc0003a4b40) Stream added, broadcasting: 3 I0514 14:24:33.131639 6 log.go:172] (0xc000f102c0) Reply frame received for 3 I0514 14:24:33.131666 6 log.go:172] (0xc000f102c0) (0xc000a34be0) Create stream I0514 14:24:33.131676 6 log.go:172] (0xc000f102c0) (0xc000a34be0) Stream added, broadcasting: 5 I0514 14:24:33.133237 6 log.go:172] (0xc000f102c0) Reply frame received for 5 I0514 14:24:33.194020 6 log.go:172] (0xc000f102c0) Data frame received for 5 I0514 14:24:33.194042 6 log.go:172] (0xc000a34be0) (5) Data frame handling I0514 14:24:33.194071 6 log.go:172] (0xc000f102c0) Data frame received for 3 I0514 14:24:33.194080 6 log.go:172] (0xc0003a4b40) (3) Data frame handling I0514 14:24:33.194098 6 log.go:172] (0xc0003a4b40) (3) Data frame sent I0514 14:24:33.194105 6 log.go:172] (0xc000f102c0) Data frame received for 3 I0514 14:24:33.194114 6 log.go:172] (0xc0003a4b40) (3) Data frame handling I0514 14:24:33.195019 6 log.go:172] (0xc000f102c0) Data frame received for 1 I0514 14:24:33.195040 6 log.go:172] (0xc0031c0960) (1) Data frame handling I0514 14:24:33.195050 6 log.go:172] (0xc0031c0960) (1) Data frame sent I0514 14:24:33.195111 6 log.go:172] (0xc000f102c0) (0xc0031c0960) Stream removed, broadcasting: 1 I0514 14:24:33.195175 6 log.go:172] (0xc000f102c0) (0xc0031c0960) Stream removed, broadcasting: 1 I0514 14:24:33.195188 6 log.go:172] (0xc000f102c0) 
(0xc0003a4b40) Stream removed, broadcasting: 3 I0514 14:24:33.195289 6 log.go:172] (0xc000f102c0) (0xc000a34be0) Stream removed, broadcasting: 5 May 14 14:24:33.195: INFO: Exec stderr: "" May 14 14:24:33.195: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:33.195: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:33.197333 6 log.go:172] (0xc000f102c0) Go away received I0514 14:24:33.219827 6 log.go:172] (0xc000f10c60) (0xc0031c0aa0) Create stream I0514 14:24:33.219857 6 log.go:172] (0xc000f10c60) (0xc0031c0aa0) Stream added, broadcasting: 1 I0514 14:24:33.221353 6 log.go:172] (0xc000f10c60) Reply frame received for 1 I0514 14:24:33.221385 6 log.go:172] (0xc000f10c60) (0xc000a34f00) Create stream I0514 14:24:33.221393 6 log.go:172] (0xc000f10c60) (0xc000a34f00) Stream added, broadcasting: 3 I0514 14:24:33.221953 6 log.go:172] (0xc000f10c60) Reply frame received for 3 I0514 14:24:33.221978 6 log.go:172] (0xc000f10c60) (0xc000a34fa0) Create stream I0514 14:24:33.221988 6 log.go:172] (0xc000f10c60) (0xc000a34fa0) Stream added, broadcasting: 5 I0514 14:24:33.222468 6 log.go:172] (0xc000f10c60) Reply frame received for 5 I0514 14:24:33.276462 6 log.go:172] (0xc000f10c60) Data frame received for 3 I0514 14:24:33.276482 6 log.go:172] (0xc000a34f00) (3) Data frame handling I0514 14:24:33.276510 6 log.go:172] (0xc000a34f00) (3) Data frame sent I0514 14:24:33.276517 6 log.go:172] (0xc000f10c60) Data frame received for 3 I0514 14:24:33.276522 6 log.go:172] (0xc000a34f00) (3) Data frame handling I0514 14:24:33.276581 6 log.go:172] (0xc000f10c60) Data frame received for 5 I0514 14:24:33.276600 6 log.go:172] (0xc000a34fa0) (5) Data frame handling I0514 14:24:33.278133 6 log.go:172] (0xc000f10c60) Data frame received for 1 I0514 14:24:33.278155 6 log.go:172] (0xc0031c0aa0) (1) Data frame handling I0514 14:24:33.278168 6 log.go:172] (0xc0031c0aa0) (1) Data frame sent I0514 14:24:33.278184 6 log.go:172] (0xc000f10c60) (0xc0031c0aa0) Stream removed, broadcasting: 1 I0514 14:24:33.278232 6 log.go:172] (0xc000f10c60) Go away received I0514 14:24:33.278270 6 log.go:172] (0xc000f10c60) (0xc0031c0aa0) Stream removed, broadcasting: 1 I0514 14:24:33.278289 6 log.go:172] (0xc000f10c60) (0xc000a34f00) Stream removed, broadcasting: 3 I0514 14:24:33.278303 6 log.go:172] (0xc000f10c60) (0xc000a34fa0) Stream removed, broadcasting: 5 May 14 14:24:33.278: INFO: Exec stderr: "" May 14 14:24:33.278: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:33.278: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:33.305580 6 log.go:172] (0xc00070fe40) (0xc000a355e0) Create stream I0514 14:24:33.305606 6 log.go:172] (0xc00070fe40) (0xc000a355e0) Stream added, broadcasting: 1 I0514 14:24:33.307217 6 log.go:172] (0xc00070fe40) Reply frame received for 1 I0514 14:24:33.307256 6 log.go:172] (0xc00070fe40) (0xc00059a140) Create stream I0514 14:24:33.307271 6 log.go:172] (0xc00070fe40) (0xc00059a140) Stream added, broadcasting: 3 I0514 14:24:33.308026 6 log.go:172] (0xc00070fe40) Reply frame received for 3 I0514 14:24:33.308052 6 log.go:172] (0xc00070fe40) (0xc0003a4be0) Create stream I0514 14:24:33.308066 6 log.go:172] (0xc00070fe40) (0xc0003a4be0) Stream added, broadcasting: 5 
I0514 14:24:33.308828 6 log.go:172] (0xc00070fe40) Reply frame received for 5 I0514 14:24:33.372728 6 log.go:172] (0xc00070fe40) Data frame received for 5 I0514 14:24:33.372754 6 log.go:172] (0xc0003a4be0) (5) Data frame handling I0514 14:24:33.372791 6 log.go:172] (0xc00070fe40) Data frame received for 3 I0514 14:24:33.372831 6 log.go:172] (0xc00059a140) (3) Data frame handling I0514 14:24:33.372854 6 log.go:172] (0xc00059a140) (3) Data frame sent I0514 14:24:33.372885 6 log.go:172] (0xc00070fe40) Data frame received for 3 I0514 14:24:33.372912 6 log.go:172] (0xc00059a140) (3) Data frame handling I0514 14:24:33.373811 6 log.go:172] (0xc00070fe40) Data frame received for 1 I0514 14:24:33.373846 6 log.go:172] (0xc000a355e0) (1) Data frame handling I0514 14:24:33.373878 6 log.go:172] (0xc000a355e0) (1) Data frame sent I0514 14:24:33.373900 6 log.go:172] (0xc00070fe40) (0xc000a355e0) Stream removed, broadcasting: 1 I0514 14:24:33.373922 6 log.go:172] (0xc00070fe40) Go away received I0514 14:24:33.374019 6 log.go:172] (0xc00070fe40) (0xc000a355e0) Stream removed, broadcasting: 1 I0514 14:24:33.374041 6 log.go:172] (0xc00070fe40) (0xc00059a140) Stream removed, broadcasting: 3 I0514 14:24:33.374049 6 log.go:172] (0xc00070fe40) (0xc0003a4be0) Stream removed, broadcasting: 5 May 14 14:24:33.374: INFO: Exec stderr: "" May 14 14:24:33.374: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5325 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 14:24:33.374: INFO: >>> kubeConfig: /root/.kube/config I0514 14:24:33.403852 6 log.go:172] (0xc0016166e0) (0xc0003a52c0) Create stream I0514 14:24:33.403892 6 log.go:172] (0xc0016166e0) (0xc0003a52c0) Stream added, broadcasting: 1 I0514 14:24:33.406221 6 log.go:172] (0xc0016166e0) Reply frame received for 1 I0514 14:24:33.406267 6 log.go:172] (0xc0016166e0) (0xc001134fa0) Create stream I0514 14:24:33.406288 6 log.go:172] (0xc0016166e0) (0xc001134fa0) Stream added, broadcasting: 3 I0514 14:24:33.407132 6 log.go:172] (0xc0016166e0) Reply frame received for 3 I0514 14:24:33.407162 6 log.go:172] (0xc0016166e0) (0xc0031c0b40) Create stream I0514 14:24:33.407172 6 log.go:172] (0xc0016166e0) (0xc0031c0b40) Stream added, broadcasting: 5 I0514 14:24:33.407990 6 log.go:172] (0xc0016166e0) Reply frame received for 5 I0514 14:24:33.457839 6 log.go:172] (0xc0016166e0) Data frame received for 5 I0514 14:24:33.457872 6 log.go:172] (0xc0031c0b40) (5) Data frame handling I0514 14:24:33.457901 6 log.go:172] (0xc0016166e0) Data frame received for 3 I0514 14:24:33.457911 6 log.go:172] (0xc001134fa0) (3) Data frame handling I0514 14:24:33.457923 6 log.go:172] (0xc001134fa0) (3) Data frame sent I0514 14:24:33.457933 6 log.go:172] (0xc0016166e0) Data frame received for 3 I0514 14:24:33.457943 6 log.go:172] (0xc001134fa0) (3) Data frame handling I0514 14:24:33.459253 6 log.go:172] (0xc0016166e0) Data frame received for 1 I0514 14:24:33.459268 6 log.go:172] (0xc0003a52c0) (1) Data frame handling I0514 14:24:33.459275 6 log.go:172] (0xc0003a52c0) (1) Data frame sent I0514 14:24:33.459284 6 log.go:172] (0xc0016166e0) (0xc0003a52c0) Stream removed, broadcasting: 1 I0514 14:24:33.459318 6 log.go:172] (0xc0016166e0) Go away received I0514 14:24:33.459394 6 log.go:172] (0xc0016166e0) (0xc0003a52c0) Stream removed, broadcasting: 1 I0514 14:24:33.459420 6 log.go:172] (0xc0016166e0) (0xc001134fa0) Stream removed, broadcasting: 3 I0514 14:24:33.459435 6 log.go:172] 
(0xc0016166e0) (0xc0031c0b40) Stream removed, broadcasting: 5 May 14 14:24:33.459: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:24:33.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5325" for this suite. May 14 14:25:27.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:25:27.578: INFO: namespace e2e-kubelet-etc-hosts-5325 deletion completed in 54.114676648s • [SLOW TEST:65.205 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:25:27.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 14 14:25:32.173: INFO: Successfully updated pod "pod-update-fc7a2b23-5c6e-479c-9bfb-a2e678a58454" STEP: verifying the updated pod is in kubernetes May 14 14:25:32.229: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:25:32.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3212" for this suite. 
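For context on the KubeletManagedEtcHosts exec output above: the checks hinge on one detail of the pod layout. test-pod's busybox-1 and busybox-2 declare no /etc/hosts mount, so the kubelet manages the file for them; busybox-3 mounts /etc/hosts explicitly, and test-host-network-pod runs with hostNetwork=true, so the kubelet leaves both of those alone. A minimal sketch of that layout, assuming a busybox image and a hostPath-backed volume (pod and container names come from the log; everything else is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                  # name from the log
spec:
  volumes:
  - name: host-etc-hosts          # illustrative volume name
    hostPath:
      path: /etc/hosts
  containers:
  - name: busybox-1               # no /etc/hosts mount -> kubelet-managed hosts file
    image: busybox                # assumed image
    command: ["sleep", "3600"]
  - name: busybox-2               # no /etc/hosts mount -> kubelet-managed hosts file
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3               # explicit /etc/hosts mount -> NOT kubelet-managed
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts
# test-host-network-pod is the same shape with spec.hostNetwork: true, which is why
# neither of its containers sees a kubelet-managed /etc/hosts.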
May 14 14:25:54.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:25:54.380: INFO: namespace pods-3212 deletion completed in 22.147731405s • [SLOW TEST:26.803 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:25:54.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-227406d1-043f-4dfd-8ece-69ab915eab7e STEP: Creating a pod to test consume secrets May 14 14:25:54.522: INFO: Waiting up to 5m0s for pod "pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7" in namespace "secrets-330" to be "success or failure" May 14 14:25:54.540: INFO: Pod "pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.733348ms May 14 14:25:56.544: INFO: Pod "pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021986232s May 14 14:25:58.549: INFO: Pod "pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.026897313s May 14 14:26:00.553: INFO: Pod "pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030979803s STEP: Saw pod success May 14 14:26:00.553: INFO: Pod "pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7" satisfied condition "success or failure" May 14 14:26:00.556: INFO: Trying to get logs from node iruya-worker pod pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7 container secret-volume-test: STEP: delete the pod May 14 14:26:00.606: INFO: Waiting for pod pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7 to disappear May 14 14:26:00.616: INFO: Pod pod-secrets-018c8202-7d12-40e5-8241-6720d6eb15f7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:26:00.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-330" for this suite. 
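The secret volume that a "mappings and Item Mode set" spec consumes remaps a secret key to a custom path and pins a per-file mode. A sketch with assumed key, path, and mode values (only the container name secret-volume-test appears in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # illustrative
      items:
      - key: data-1                    # remap this key ("mappings")...
        path: new-path-data-1          # ...to a custom file name
        mode: 0400                     # per-item file mode ("Item Mode set")
  containers:
  - name: secret-volume-test           # container name from the log
    image: busybox                     # assumed
    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true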
May 14 14:26:06.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:26:06.878: INFO: namespace secrets-330 deletion completed in 6.258479713s • [SLOW TEST:12.497 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:26:06.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:26:07.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b" in namespace "downward-api-5528" to be "success or failure" May 14 14:26:07.271: INFO: Pod "downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090966ms May 14 14:26:09.274: INFO: Pod "downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007369777s May 14 14:26:11.278: INFO: Pod "downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011380696s May 14 14:26:13.282: INFO: Pod "downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014943953s STEP: Saw pod success May 14 14:26:13.282: INFO: Pod "downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b" satisfied condition "success or failure" May 14 14:26:13.284: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b container client-container: STEP: delete the pod May 14 14:26:13.410: INFO: Waiting for pod downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b to disappear May 14 14:26:13.421: INFO: Pod downwardapi-volume-9ca4f944-6c64-490d-afa4-92799ef5464b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:26:13.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5528" for this suite. 
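The downward-API check above relies on a documented fallback: a resourceFieldRef for limits.memory on a container that sets no memory limit resolves to the node's allocatable memory. Roughly, with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # no limit set below -> node allocatable memory
  containers:
  - name: client-container            # container name from the log
    image: busybox                    # assumed
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: resources.limits.memory is deliberately unset
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo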
May 14 14:26:19.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:26:19.507: INFO: namespace downward-api-5528 deletion completed in 6.081454269s • [SLOW TEST:12.629 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:26:19.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5002.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5002.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 14:26:27.677: INFO: DNS probes using dns-test-eed21d84-7b17-4f88-8eac-3216d7e16460 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5002.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5002.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 14:26:34.509: INFO: File wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:34.540: INFO: File jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 14 14:26:34.540: INFO: Lookups using dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 failed for: [wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local] May 14 14:26:39.544: INFO: File wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:39.548: INFO: File jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:39.548: INFO: Lookups using dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 failed for: [wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local] May 14 14:26:44.548: INFO: File wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:44.557: INFO: File jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:44.557: INFO: Lookups using dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 failed for: [wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local] May 14 14:26:49.545: INFO: File wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:49.549: INFO: File jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:49.549: INFO: Lookups using dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 failed for: [wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local] May 14 14:26:54.544: INFO: File wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' May 14 14:26:54.547: INFO: File jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local from pod dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 14 14:26:54.547: INFO: Lookups using dns-5002/dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 failed for: [wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local] May 14 14:26:59.549: INFO: DNS probes using dns-test-9ef1de3d-01b2-4e06-9ea7-748d6661e0f8 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5002.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5002.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5002.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5002.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 14:27:08.546: INFO: DNS probes using dns-test-fe85b597-87a2-4009-b4d8-6164834aadd4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:27:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5002" for this suite. May 14 14:27:14.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:27:14.884: INFO: namespace dns-5002 deletion completed in 6.115373261s • [SLOW TEST:55.377 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:27:14.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 14 14:27:21.540: INFO: Successfully updated pod "annotationupdateac07a021-7c1b-4533-b0d5-a9889cd54bb8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:27:23.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4404" for this suite. 
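The annotation-update spec above works because downward-API volume files are re-projected by the kubelet when pod metadata changes: the "Successfully updated pod" line is the annotation write, and the test then waits for the projected file to catch up. A sketch (annotation key and values are assumptions; log pods follow the annotationupdate<uid> naming pattern):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: "one"                   # illustrative; the test later patches this value
spec:
  containers:
  - name: client-container
    image: busybox                 # assumed
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations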
May 14 14:27:45.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:27:45.657: INFO: namespace downward-api-4404 deletion completed in 22.094454553s • [SLOW TEST:30.772 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:27:45.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 14 14:27:53.784: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:27:53.810: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:27:55.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:27:55.813: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:27:57.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:27:57.815: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:27:59.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:27:59.815: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:01.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:01.815: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:03.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:03.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:05.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:05.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:07.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:07.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:09.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:09.815: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:11.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:11.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:13.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:13.815: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:15.810: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear May 14 14:28:15.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:17.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:17.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:19.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:19.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:21.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:21.814: INFO: Pod pod-with-poststart-exec-hook still exists May 14 14:28:23.810: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 14:28:23.814: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:28:23.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4415" for this suite. May 14 14:28:45.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:28:45.953: INFO: namespace container-lifecycle-hook-4415 deletion completed in 22.134593222s • [SLOW TEST:60.295 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:28:45.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:28:46.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630" in namespace "projected-8373" to be "success or failure" May 14 14:28:46.042: INFO: Pod "downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630": Phase="Pending", Reason="", readiness=false. Elapsed: 3.982144ms May 14 14:28:48.046: INFO: Pod "downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008135455s May 14 14:28:50.051: INFO: Pod "downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.012787931s May 14 14:28:52.055: INFO: Pod "downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017423486s STEP: Saw pod success May 14 14:28:52.055: INFO: Pod "downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630" satisfied condition "success or failure" May 14 14:28:52.059: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630 container client-container: STEP: delete the pod May 14 14:28:52.090: INFO: Waiting for pod downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630 to disappear May 14 14:28:52.104: INFO: Pod downwardapi-volume-f2505e6a-92d8-4042-a7a1-4ef1fb4bb630 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:28:52.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8373" for this suite. May 14 14:28:58.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:28:58.226: INFO: namespace projected-8373 deletion completed in 6.11806805s • [SLOW TEST:12.273 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:28:58.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 14 14:29:01.327: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:29:01.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9138" for this suite. 
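The FallbackToLogsOnError result above ("Expected: &{DONE} to match Container's Termination Message: DONE") comes from a container-level policy: when a failed container wrote nothing to its terminationMessagePath, the kubelet copies the tail of its log into the termination message instead. A sketch, assuming a busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                        # assumed
    command: ["sh", "-c", "echo -n DONE; exit 1"]   # fail after logging DONE
    terminationMessagePolicy: FallbackToLogsOnError
    # nothing is written to the default /dev/termination-log, so the kubelet
    # falls back to the log tail and the terminated status carries "DONE"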
May 14 14:29:07.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:29:07.663: INFO: namespace container-runtime-9138 deletion completed in 6.316962623s • [SLOW TEST:9.437 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:29:07.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:29:36.016: INFO: Container started at 2020-05-14 14:29:10 +0000 UTC, pod became ready at 2020-05-14 14:29:34 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:29:36.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5607" for this suite. 
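The readiness spec above asserts two properties at once: the pod must not report Ready before the probe's initial delay has elapsed (the log shows roughly 24s between container start and readiness), and the container must never restart, since readiness failures only gate traffic while only liveness failures trigger restarts. A sketch with assumed image and probe values:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example   # illustrative
spec:
  containers:
  - name: test-webserver
    image: nginx                 # assumed; any container serving HTTP on the probed port works
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30    # illustrative; no probe runs (and no Ready) before this
      periodSeconds: 5
    # no livenessProbe, so restartCount is expected to stay 0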
May 14 14:29:50.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:29:50.121: INFO: namespace container-probe-5607 deletion completed in 14.10095841s • [SLOW TEST:42.458 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:29:50.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 14 14:29:50.239: INFO: Waiting up to 5m0s for pod "pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf" in namespace "emptydir-3380" to be "success or failure" May 14 14:29:50.266: INFO: Pod "pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf": Phase="Pending", Reason="", readiness=false. Elapsed: 26.129146ms May 14 14:29:52.270: INFO: Pod "pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030388379s May 14 14:29:54.274: INFO: Pod "pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034398126s STEP: Saw pod success May 14 14:29:54.274: INFO: Pod "pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf" satisfied condition "success or failure" May 14 14:29:54.277: INFO: Trying to get logs from node iruya-worker2 pod pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf container test-container: STEP: delete the pod May 14 14:29:54.327: INFO: Waiting for pod pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf to disappear May 14 14:29:54.332: INFO: Pod pod-135fc2bc-fb44-493b-926b-20ccd0fc06cf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:29:54.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3380" for this suite. 
May 14 14:30:00.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:30:00.428: INFO: namespace emptydir-3380 deletion completed in 6.093200092s • [SLOW TEST:10.306 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:30:00.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 14 14:30:00.557: INFO: Waiting up to 5m0s for pod "pod-668ac32c-2321-4662-a927-9c55786b9a4b" in namespace "emptydir-6314" to be "success or failure" May 14 14:30:00.560: INFO: Pod "pod-668ac32c-2321-4662-a927-9c55786b9a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310728ms May 14 14:30:02.565: INFO: Pod "pod-668ac32c-2321-4662-a927-9c55786b9a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007519543s May 14 14:30:04.568: INFO: Pod "pod-668ac32c-2321-4662-a927-9c55786b9a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01126744s STEP: Saw pod success May 14 14:30:04.568: INFO: Pod "pod-668ac32c-2321-4662-a927-9c55786b9a4b" satisfied condition "success or failure" May 14 14:30:04.571: INFO: Trying to get logs from node iruya-worker pod pod-668ac32c-2321-4662-a927-9c55786b9a4b container test-container: STEP: delete the pod May 14 14:30:04.593: INFO: Waiting for pod pod-668ac32c-2321-4662-a927-9c55786b9a4b to disappear May 14 14:30:04.602: INFO: Pod pod-668ac32c-2321-4662-a927-9c55786b9a4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:30:04.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6314" for this suite. 
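The two emptyDir specs above are the same shape and differ only in the backing medium: (root,0644,default) uses the node's default disk-backed storage, while (root,0666,tmpfs) sets medium: Memory; the (user,mode) tuple names who writes the test file and which mode is expected on it. A combined sketch with assumed image and command (the suite itself uses a mounttest-style image):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example           # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs variant; drop this field for the default medium
  containers:
  - name: test-container           # container name from the log
    image: busybox                 # assumed
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume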
May 14 14:30:10.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:30:10.767: INFO: namespace emptydir-6314 deletion completed in 6.16159068s • [SLOW TEST:10.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:30:10.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 14 14:30:10.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2438' May 14 14:30:14.157: INFO: stderr: "" May 14 14:30:14.157: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 14:30:14.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2438' May 14 14:30:14.282: INFO: stderr: "" May 14 14:30:14.282: INFO: stdout: "update-demo-nautilus-kr9tr update-demo-nautilus-xr85m " May 14 14:30:14.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kr9tr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:14.403: INFO: stderr: "" May 14 14:30:14.403: INFO: stdout: "" May 14 14:30:14.403: INFO: update-demo-nautilus-kr9tr is created but not running May 14 14:30:19.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2438' May 14 14:30:19.501: INFO: stderr: "" May 14 14:30:19.501: INFO: stdout: "update-demo-nautilus-kr9tr update-demo-nautilus-xr85m " May 14 14:30:19.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kr9tr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:19.595: INFO: stderr: "" May 14 14:30:19.595: INFO: stdout: "true" May 14 14:30:19.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kr9tr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:19.696: INFO: stderr: "" May 14 14:30:19.696: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 14:30:19.696: INFO: validating pod update-demo-nautilus-kr9tr May 14 14:30:19.699: INFO: got data: { "image": "nautilus.jpg" } May 14 14:30:19.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 14:30:19.699: INFO: update-demo-nautilus-kr9tr is verified up and running May 14 14:30:19.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xr85m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:19.799: INFO: stderr: "" May 14 14:30:19.799: INFO: stdout: "true" May 14 14:30:19.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xr85m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:19.897: INFO: stderr: "" May 14 14:30:19.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 14:30:19.897: INFO: validating pod update-demo-nautilus-xr85m May 14 14:30:19.901: INFO: got data: { "image": "nautilus.jpg" } May 14 14:30:19.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 14:30:19.902: INFO: update-demo-nautilus-xr85m is verified up and running STEP: rolling-update to new replication controller May 14 14:30:19.903: INFO: scanned /root for discovery docs: May 14 14:30:19.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2438' May 14 14:30:42.533: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 14 14:30:42.533: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 14:30:42.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2438' May 14 14:30:42.626: INFO: stderr: "" May 14 14:30:42.626: INFO: stdout: "update-demo-kitten-5lvp6 update-demo-kitten-swv6t " May 14 14:30:42.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5lvp6 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:42.727: INFO: stderr: "" May 14 14:30:42.727: INFO: stdout: "true" May 14 14:30:42.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5lvp6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:42.819: INFO: stderr: "" May 14 14:30:42.819: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 14 14:30:42.819: INFO: validating pod update-demo-kitten-5lvp6 May 14 14:30:42.899: INFO: got data: { "image": "kitten.jpg" } May 14 14:30:42.899: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 14 14:30:42.899: INFO: update-demo-kitten-5lvp6 is verified up and running May 14 14:30:42.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-swv6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:42.995: INFO: stderr: "" May 14 14:30:42.995: INFO: stdout: "true" May 14 14:30:42.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-swv6t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2438' May 14 14:30:43.091: INFO: stderr: "" May 14 14:30:43.091: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 14 14:30:43.092: INFO: validating pod update-demo-kitten-swv6t May 14 14:30:43.096: INFO: got data: { "image": "kitten.jpg" } May 14 14:30:43.096: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 14 14:30:43.096: INFO: update-demo-kitten-swv6t is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:30:43.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2438" for this suite. 
May 14 14:31:07.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:31:07.230: INFO: namespace kubectl-2438 deletion completed in 24.130540068s • [SLOW TEST:56.463 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:31:07.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-5799 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5799 to expose endpoints map[] May 14 14:31:07.341: INFO: Get endpoints failed (12.98862ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 14 14:31:08.370: INFO: successfully validated that service multi-endpoint-test in namespace services-5799 exposes endpoints map[] (1.042669376s elapsed) STEP: Creating pod pod1 in namespace services-5799 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5799 to expose endpoints map[pod1:[100]] May 14 14:31:12.611: INFO: successfully validated that service multi-endpoint-test in namespace services-5799 exposes endpoints map[pod1:[100]] (4.235116993s elapsed) STEP: Creating pod pod2 in namespace services-5799 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5799 to expose endpoints map[pod1:[100] pod2:[101]] May 14 14:31:15.702: INFO: successfully validated that service multi-endpoint-test in namespace services-5799 exposes endpoints map[pod1:[100] pod2:[101]] (3.08697172s elapsed) STEP: Deleting pod pod1 in namespace services-5799 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5799 to expose endpoints map[pod2:[101]] May 14 14:31:16.748: INFO: successfully validated that service multi-endpoint-test in namespace services-5799 exposes endpoints map[pod2:[101]] (1.040928313s elapsed) STEP: Deleting pod pod2 in namespace services-5799 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5799 to expose endpoints map[] May 14 14:31:17.763: INFO: successfully validated that service multi-endpoint-test in namespace services-5799 exposes endpoints map[] (1.011609447s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:31:17.954: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "services-5799" for this suite. May 14 14:31:24.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:31:24.122: INFO: namespace services-5799 deletion completed in 6.140376877s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:16.891 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:31:24.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 14 14:31:24.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8188' May 14 14:31:24.588: INFO: stderr: "" May 14 14:31:24.588: INFO: stdout: "pod/pause created\n" May 14 14:31:24.588: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 14 14:31:24.588: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8188" to be "running and ready" May 14 14:31:24.592: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17971ms May 14 14:31:26.596: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00772053s May 14 14:31:28.600: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.011353006s May 14 14:31:28.600: INFO: Pod "pause" satisfied condition "running and ready" May 14 14:31:28.600: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 14 14:31:28.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8188' May 14 14:31:28.702: INFO: stderr: "" May 14 14:31:28.702: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 14 14:31:28.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8188' May 14 14:31:28.794: INFO: stderr: "" May 14 14:31:28.794: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 14 14:31:28.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8188' May 14 14:31:28.903: INFO: stderr: "" May 14 14:31:28.903: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 14 14:31:28.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8188' May 14 14:31:29.004: INFO: stderr: "" May 14 14:31:29.004: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 14 14:31:29.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8188' May 14 14:31:29.260: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:31:29.260: INFO: stdout: "pod \"pause\" force deleted\n" May 14 14:31:29.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8188' May 14 14:31:29.358: INFO: stderr: "No resources found.\n" May 14 14:31:29.358: INFO: stdout: "" May 14 14:31:29.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8188 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 14:31:29.485: INFO: stderr: "" May 14 14:31:29.485: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:31:29.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8188" for this suite. 
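Stripped of the harness plumbing, the label test is three kubectl calls: add a label, read it back with -L (which renders the label as an extra output column), and remove it with the trailing-dash form, exactly as logged above:

    kubectl label pods pause testing-label=testing-label-value   # add the label
    kubectl get pod pause -L testing-label                       # show a TESTING-LABEL column
    kubectl label pods pause testing-label-                      # trailing '-' removes it
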
May 14 14:31:35.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:31:35.603: INFO: namespace kubectl-8188 deletion completed in 6.113651501s • [SLOW TEST:11.481 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:31:35.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:31:40.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7887" for this suite. 
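The kubelet case above echoes no STEP detail for its body, but its subject is a busybox container whose root filesystem is mounted read-only. A minimal sketch of such a pod, assuming only the standard securityContext field (the name and the write probe are illustrative, not the test's actual pod):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-fs       # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        # A write to the root filesystem should fail (read-only file system).
        command: ["sh", "-c", "touch /should-fail; echo exit=$?"]
        securityContext:
          readOnlyRootFilesystem: true
    EOF
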
May 14 14:32:24.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:32:24.182: INFO: namespace kubelet-test-7887 deletion completed in 44.134656701s • [SLOW TEST:48.579 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:32:24.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 14 14:32:28.818: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5330 pod-service-account-4d9184d8-db0c-4d45-8ead-13e3d113f220 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 14 14:32:29.058: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5330 pod-service-account-4d9184d8-db0c-4d45-8ead-13e3d113f220 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 14 14:32:29.266: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5330 pod-service-account-4d9184d8-db0c-4d45-8ead-13e3d113f220 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:32:29.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5330" for this suite. 
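The three exec calls above read the files that the service-account admission controller projects into every token-mounting pod; the general pattern, with placeholders for the generated names, is:

    kubectl exec --namespace=<ns> <pod> -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
    kubectl exec --namespace=<ns> <pod> -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubectl exec --namespace=<ns> <pod> -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace

The opt-out counterpart later in this run creates pods whose specs set automountServiceAccountToken: false (or use a ServiceAccount that sets it), which is why several of its pods report "token volume mount: false".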
May 14 14:32:35.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:32:35.565: INFO: namespace svcaccounts-5330 deletion completed in 6.097975975s • [SLOW TEST:11.382 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:32:35.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 14 14:32:39.719: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-5e64682d-d37e-4538-a2e0-4fcf9faa6a5f,GenerateName:,Namespace:events-9649,SelfLink:/api/v1/namespaces/events-9649/pods/send-events-5e64682d-d37e-4538-a2e0-4fcf9faa6a5f,UID:1cd488ef-86ac-4543-b1df-7e11f6d9b0c0,ResourceVersion:10872735,Generation:0,CreationTimestamp:2020-05-14 14:32:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 667454026,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cbrkv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cbrkv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-cbrkv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00345e300} {node.kubernetes.io/unreachable Exists NoExecute 0xc00345e320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:32:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:32:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:32:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 14:32:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.247,StartTime:2020-05-14 14:32:35 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-14 14:32:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://9c70114bddebd02b44dc86ea47a4db6a162bef7d34a84d66caffba3b0de938de}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 14 14:32:41.723: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 14 14:32:43.727: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:32:43.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9649" for this suite. May 14 14:33:21.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:33:21.836: INFO: namespace events-9649 deletion completed in 38.096850337s • [SLOW TEST:46.271 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:33:21.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 14 14:33:22.444: INFO: created pod pod-service-account-defaultsa May 14 14:33:22.444: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 14 14:33:22.451: INFO: created pod pod-service-account-mountsa May 14 14:33:22.451: INFO: pod pod-service-account-mountsa service account token volume mount: true May 14 14:33:22.489: INFO: created pod pod-service-account-nomountsa May 14 14:33:22.489: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 14 14:33:22.506: INFO: created pod pod-service-account-defaultsa-mountspec May 14 14:33:22.506: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 14 14:33:22.566: INFO: created pod pod-service-account-mountsa-mountspec May 14 14:33:22.566: INFO: pod 
pod-service-account-mountsa-mountspec service account token volume mount: true May 14 14:33:22.604: INFO: created pod pod-service-account-nomountsa-mountspec May 14 14:33:22.604: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 14 14:33:22.627: INFO: created pod pod-service-account-defaultsa-nomountspec May 14 14:33:22.627: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 14 14:33:22.704: INFO: created pod pod-service-account-mountsa-nomountspec May 14 14:33:22.704: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 14 14:33:22.720: INFO: created pod pod-service-account-nomountsa-nomountspec May 14 14:33:22.720: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:33:22.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8021" for this suite. May 14 14:33:52.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:33:52.975: INFO: namespace svcaccounts-8021 deletion completed in 30.176821155s • [SLOW TEST:31.139 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:33:52.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 14 14:33:53.058: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
May 14 14:33:53.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1352' May 14 14:33:53.457: INFO: stderr: "" May 14 14:33:53.457: INFO: stdout: "service/redis-slave created\n" May 14 14:33:53.457: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
May 14 14:33:53.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1352' May 14 14:33:53.726: INFO: stderr: "" May 14 14:33:53.726: INFO: stdout: "service/redis-master created\n" May 14 14:33:53.727:
INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 14 14:33:53.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1352' May 14 14:33:54.105: INFO: stderr: "" May 14 14:33:54.105: INFO: stdout: "service/frontend created\n" May 14 14:33:54.105: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
May 14 14:33:54.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1352' May 14 14:33:54.400: INFO: stderr: "" May 14 14:33:54.400: INFO: stdout: "deployment.apps/frontend created\n" May 14 14:33:54.400: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 14 14:33:54.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1352' May 14 14:33:54.732: INFO: stderr: "" May 14 14:33:54.732: INFO: stdout: "deployment.apps/redis-master created\n" May 14 14:33:54.732: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
May 14 14:33:54.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1352' May 14 14:33:55.044: INFO: stderr: "" May 14 14:33:55.044: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 14 14:33:55.044: INFO: Waiting for all frontend pods to be Running. May 14 14:34:05.094: INFO: Waiting for frontend to serve content. May 14 14:34:05.110: INFO: Trying to add a new entry to the guestbook. May 14 14:34:05.121: INFO: Verifying that added entry can be retrieved.
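The validation step waits for the three frontend replicas to reach Running and then exercises the app end to end. A rough by-hand equivalent, assuming a kubectl new enough to have `kubectl wait` (the harness itself probes the app through the API rather than with these exact commands):

    # Wait for the frontend pods created by the Deployment above.
    kubectl wait --for=condition=Ready pod -l app=guestbook,tier=frontend --timeout=120s
    # All three Deployments should report full availability.
    kubectl get deployments frontend redis-master redis-slave
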
STEP: using delete to clean up resources May 14 14:34:05.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1352' May 14 14:34:05.374: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:34:05.374: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 14 14:34:05.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1352' May 14 14:34:05.635: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:34:05.635: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 14 14:34:05.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1352' May 14 14:34:05.811: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:34:05.811: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 14 14:34:05.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1352' May 14 14:34:05.919: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:34:05.919: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 14 14:34:05.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1352' May 14 14:34:06.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:34:06.027: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 14 14:34:06.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1352' May 14 14:34:06.226: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 14:34:06.226: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:34:06.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1352" for this suite. 
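Teardown uses force deletion, which is why kubectl prints the "Immediate deletion does not wait for confirmation" warning for every object: --grace-period=0 --force removes the API object immediately instead of waiting for the kubelet to confirm termination, so the container may keep running on the node for a while. The pattern, fed the same manifests on stdin:

    kubectl delete --grace-period=0 --force -f - --namespace=<ns> < manifest.yaml
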
May 14 14:34:46.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:34:46.359: INFO: namespace kubectl-1352 deletion completed in 40.099896452s • [SLOW TEST:53.384 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:34:46.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-8q9w STEP: Creating a pod to test atomic-volume-subpath May 14 14:34:46.482: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8q9w" in namespace "subpath-7329" to be "success or failure" May 14 14:34:46.498: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 16.33359ms May 14 14:34:48.502: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020398729s May 14 14:34:50.506: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 4.024464185s May 14 14:34:52.509: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 6.027665683s May 14 14:34:54.514: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 8.032756329s May 14 14:34:56.519: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 10.03701523s May 14 14:34:58.524: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 12.042111679s May 14 14:35:00.528: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 14.046565876s May 14 14:35:02.543: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 16.061337118s May 14 14:35:04.546: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 18.064717064s May 14 14:35:06.551: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 20.069036138s May 14 14:35:08.555: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.073732885s May 14 14:35:10.560: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Running", Reason="", readiness=true. Elapsed: 24.078316016s May 14 14:35:12.565: INFO: Pod "pod-subpath-test-downwardapi-8q9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.083343933s STEP: Saw pod success May 14 14:35:12.565: INFO: Pod "pod-subpath-test-downwardapi-8q9w" satisfied condition "success or failure" May 14 14:35:12.568: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-8q9w container test-container-subpath-downwardapi-8q9w: STEP: delete the pod May 14 14:35:12.596: INFO: Waiting for pod pod-subpath-test-downwardapi-8q9w to disappear May 14 14:35:12.612: INFO: Pod pod-subpath-test-downwardapi-8q9w no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-8q9w May 14 14:35:12.612: INFO: Deleting pod "pod-subpath-test-downwardapi-8q9w" in namespace "subpath-7329" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:35:12.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7329" for this suite. May 14 14:35:18.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:35:18.728: INFO: namespace subpath-7329 deletion completed in 6.098619043s • [SLOW TEST:32.369 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:35:18.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-528d071e-0a0e-4a13-bc95-6adce49b8f01 STEP: Creating a pod to test consume configMaps May 14 14:35:18.824: INFO: Waiting up to 5m0s for pod "pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670" in namespace "configmap-7628" to be "success or failure" May 14 14:35:18.828: INFO: Pod "pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670": Phase="Pending", Reason="", readiness=false. Elapsed: 3.388389ms May 14 14:35:20.832: INFO: Pod "pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008181201s May 14 14:35:22.888: INFO: Pod "pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.063442699s STEP: Saw pod success May 14 14:35:22.888: INFO: Pod "pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670" satisfied condition "success or failure" May 14 14:35:22.911: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670 container configmap-volume-test: STEP: delete the pod May 14 14:35:22.936: INFO: Waiting for pod pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670 to disappear May 14 14:35:23.040: INFO: Pod pod-configmaps-e448489a-ae6f-4f14-808c-e21ef5d93670 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:35:23.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7628" for this suite. May 14 14:35:29.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:35:29.207: INFO: namespace configmap-7628 deletion completed in 6.16335665s • [SLOW TEST:10.479 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:35:29.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-glmqw in namespace proxy-1426 I0514 14:35:29.612403 6 runners.go:180] Created replication controller with name: proxy-service-glmqw, namespace: proxy-1426, replica count: 1 I0514 14:35:30.662747 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 14:35:31.662976 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 14:35:32.663184 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 14:35:33.663417 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:34.663667 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:35.663869 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:36.664085 6 runners.go:180] proxy-service-glmqw 
Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:37.664454 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:38.664688 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:39.664875 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 14:35:40.665056 6 runners.go:180] proxy-service-glmqw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 14:35:40.668: INFO: setup took 11.140792255s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 14 14:35:40.677: INFO: (0) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 9.4391ms) May 14 14:35:40.677: INFO: (0) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 9.708154ms) May 14 14:35:40.677: INFO: (0) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 9.657819ms) May 14 14:35:40.677: INFO: (0) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 9.702786ms) May 14 14:35:40.677: INFO: (0) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 9.685022ms) May 14 14:35:40.678: INFO: (0) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 9.937405ms) May 14 14:35:40.678: INFO: (0) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 9.877159ms) May 14 14:35:40.678: INFO: (0) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 9.801472ms) May 14 14:35:40.678: INFO: (0) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 9.958512ms) May 14 14:35:40.678: INFO: (0) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 10.299303ms) May 14 14:35:40.679: INFO: (0) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 10.903985ms) May 14 14:35:40.682: INFO: (0) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 3.142827ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 3.690549ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 3.68092ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.782252ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 3.846935ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.877276ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 3.869725ms) May 14 14:35:40.688: INFO: (1) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.160199ms) May 14 14:35:40.689: INFO: (1) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.660437ms) May 14 14:35:40.689: INFO: (1) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 5.273971ms) May 14 14:35:40.690: INFO: (1) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 5.390714ms) May 14 14:35:40.690: INFO: (1) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 5.40714ms) May 14 14:35:40.692: INFO: (1) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 8.181236ms) May 14 14:35:40.695: INFO: (2) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 2.97008ms) May 14 14:35:40.695: INFO: (2) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 2.952521ms) May 14 14:35:40.695: INFO: (2) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.14825ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.229271ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 3.632594ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 3.724167ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.713722ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.746408ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.661115ms) May 14 14:35:40.696: INFO: (2) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 2.154654ms) May 14 14:35:40.700: INFO: (3) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 3.536251ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 4.12646ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.08938ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 4.405701ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 4.463938ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.43442ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.473838ms) May 14 14:35:40.701: INFO: (3) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.49066ms) May 14 14:35:40.702: INFO: (3) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.674354ms) May 14 14:35:40.702: INFO: (3) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.676079ms) May 14 14:35:40.704: INFO: (4) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 2.259366ms) May 14 14:35:40.704: INFO: (4) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 2.518742ms) May 14 14:35:40.704: INFO: (4) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 2.488223ms) May 14 14:35:40.706: INFO: (4) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.97842ms) May 14 14:35:40.706: INFO: (4) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.007032ms) May 14 14:35:40.706: INFO: (4) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 4.048753ms) May 14 14:35:40.706: INFO: (4) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.053453ms) May 14 14:35:40.706: INFO: (4) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 4.098318ms) May 14 14:35:40.706: INFO: (4) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.080056ms) May 14 14:35:40.707: INFO: (4) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.937617ms) May 14 14:35:40.707: INFO: (4) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 5.099152ms) May 14 14:35:40.707: INFO: (4) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 5.477403ms) May 14 14:35:40.707: INFO: (4) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 5.557407ms) May 14 14:35:40.707: INFO: (4) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 5.533451ms) May 14 14:35:40.707: INFO: (4) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 5.563433ms) May 14 14:35:40.709: INFO: (5) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 1.783913ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.282425ms) May 14 14:35:40.712: INFO: (5) 
/api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.729113ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 4.73857ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 4.775199ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.805549ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.776816ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 4.839298ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.830336ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.914733ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.992817ms) May 14 14:35:40.712: INFO: (5) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.933816ms) May 14 14:35:40.714: INFO: (6) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 2.09848ms) May 14 14:35:40.715: INFO: (6) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 3.842593ms) May 14 14:35:40.716: INFO: (6) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 3.80153ms) May 14 14:35:40.716: INFO: (6) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.86705ms) May 14 14:35:40.716: INFO: (6) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 3.917015ms) May 14 14:35:40.716: INFO: (6) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 3.936141ms) May 14 14:35:40.716: INFO: (6) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.003519ms) May 14 14:35:40.716: INFO: (6) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 4.042527ms) May 14 14:35:40.719: INFO: (7) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 2.52708ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.270554ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 2.849309ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 2.800475ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.069656ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 2.943988ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 3.352726ms) May 14 14:35:40.720: INFO: (7) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 3.421051ms) May 14 14:35:40.721: INFO: (7) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.580855ms) May 14 14:35:40.722: INFO: (7) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.9855ms) May 14 14:35:40.722: INFO: (7) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.523711ms) May 14 14:35:40.722: INFO: (7) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 4.972539ms) May 14 14:35:40.722: INFO: (7) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.747743ms) May 14 14:35:40.722: INFO: (7) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 4.736496ms) May 14 14:35:40.722: INFO: (7) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 5.010917ms) May 14 14:35:40.725: INFO: (8) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 2.744831ms) May 14 14:35:40.725: INFO: (8) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.017862ms) May 14 14:35:40.725: INFO: (8) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.07474ms) May 14 14:35:40.725: INFO: (8) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 3.139973ms) May 14 14:35:40.726: INFO: (8) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 3.250772ms) May 14 14:35:40.726: INFO: (8) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 3.290609ms) May 14 14:35:40.726: INFO: (8) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 3.180019ms) May 14 14:35:40.726: INFO: (8) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 4.127441ms) May 14 14:35:40.727: INFO: (8) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.233066ms) May 14 14:35:40.727: INFO: (8) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.353839ms) May 14 14:35:40.727: INFO: (8) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.296541ms) May 14 14:35:40.727: INFO: (8) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.508345ms) May 14 14:35:40.727: INFO: (8) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.50457ms) May 14 14:35:40.730: INFO: (9) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 3.307622ms) May 14 14:35:40.730: INFO: (9) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.338451ms) May 14 14:35:40.730: INFO: (9) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.398509ms) May 14 14:35:40.730: INFO: (9) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 3.4364ms) May 14 14:35:40.730: INFO: (9) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 3.410814ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 3.869728ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 3.920766ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.063569ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.152589ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.246406ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.352923ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.336462ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 4.358405ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.455046ms) May 14 14:35:40.731: INFO: (9) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.556121ms) May 14 14:35:40.732: INFO: (9) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 3.548732ms) May 14 14:35:40.735: INFO: (10) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 3.736268ms) May 14 14:35:40.735: INFO: (10) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.770824ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 3.923782ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 3.848136ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 3.961052ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 4.1049ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.302176ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 4.440573ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.478447ms) May 14 14:35:40.736: INFO: (10) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 4.459451ms) May 14 14:35:40.740: INFO: (11) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.324388ms) May 14 14:35:40.740: INFO: (11) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.288922ms) May 14 14:35:40.740: INFO: (11) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 3.364257ms) May 14 14:35:40.740: INFO: (11) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 3.281612ms) May 14 14:35:40.740: INFO: (11) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.975444ms) May 14 14:35:40.740: INFO: (11) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.205619ms) May 14 14:35:40.741: INFO: (11) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.54253ms) May 14 14:35:40.741: INFO: (11) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 4.667024ms) May 14 14:35:40.741: INFO: (11) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: ... (200; 4.939821ms) May 14 14:35:40.741: INFO: (11) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 5.23653ms) May 14 14:35:40.742: INFO: (11) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 5.394031ms) May 14 14:35:40.742: INFO: (11) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 5.499687ms) May 14 14:35:40.742: INFO: (11) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 5.438011ms) May 14 14:35:40.742: INFO: (11) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 5.431303ms) May 14 14:35:40.742: INFO: (11) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 5.676413ms) May 14 14:35:40.745: INFO: (12) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 2.891414ms) May 14 14:35:40.745: INFO: (12) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 3.023073ms) May 14 14:35:40.746: INFO: (12) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... 
(200; 4.285902ms) May 14 14:35:40.746: INFO: (12) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.323974ms) May 14 14:35:40.746: INFO: (12) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 4.546874ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 4.535637ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.591687ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.553772ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.57637ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.576703ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.74803ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 4.925753ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.944226ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.978713ms) May 14 14:35:40.747: INFO: (12) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 5.20296ms) May 14 14:35:40.749: INFO: (13) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 1.948642ms) May 14 14:35:40.750: INFO: (13) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 2.542369ms) May 14 14:35:40.750: INFO: (13) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 2.628345ms) May 14 14:35:40.750: INFO: (13) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 2.671618ms) May 14 14:35:40.750: INFO: (13) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 2.817165ms) May 14 14:35:40.751: INFO: (13) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.087496ms) May 14 14:35:40.751: INFO: (13) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.075261ms) May 14 14:35:40.751: INFO: (13) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 4.145342ms) May 14 14:35:40.751: INFO: (13) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.201016ms) May 14 14:35:40.752: INFO: (13) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 4.226146ms) May 14 14:35:40.752: INFO: (13) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.273587ms) May 14 14:35:40.752: INFO: (13) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 4.219663ms) May 14 14:35:40.753: INFO: (13) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 5.201293ms) May 14 14:35:40.753: INFO: (13) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 5.416142ms) May 14 14:35:40.753: INFO: (13) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 5.456182ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 4.187053ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 4.318248ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.313772ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.310283ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.326157ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 4.664616ms) May 14 14:35:40.757: INFO: (14) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... (200; 4.625857ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.887843ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 5.379554ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 5.422848ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 5.395118ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 5.542ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 5.523989ms) May 14 14:35:40.758: INFO: (14) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 5.577765ms) May 14 14:35:40.761: INFO: (15) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 2.579767ms) May 14 14:35:40.761: INFO: (15) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 2.503576ms) May 14 14:35:40.761: INFO: (15) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 2.836441ms) May 14 14:35:40.762: INFO: (15) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 3.854755ms) May 14 14:35:40.762: INFO: (15) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test<... (200; 4.56214ms) May 14 14:35:40.763: INFO: (15) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... 
(200; 4.549911ms) May 14 14:35:40.763: INFO: (15) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.583089ms) May 14 14:35:40.763: INFO: (15) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.557671ms) May 14 14:35:40.763: INFO: (15) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.637351ms) May 14 14:35:40.763: INFO: (15) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.651236ms) May 14 14:35:40.763: INFO: (15) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.730043ms) May 14 14:35:40.766: INFO: (16) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 2.751243ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test<... (200; 4.074043ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.162704ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.117076ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.182176ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 4.221572ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 4.187223ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 4.177189ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 4.169613ms) May 14 14:35:40.767: INFO: (16) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.26066ms) May 14 14:35:40.768: INFO: (16) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname2/proxy/: bar (200; 4.417712ms) May 14 14:35:40.768: INFO: (16) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname1/proxy/: tls baz (200; 4.657306ms) May 14 14:35:40.768: INFO: (16) /api/v1/namespaces/proxy-1426/services/http:proxy-service-glmqw:portname1/proxy/: foo (200; 4.73513ms) May 14 14:35:40.768: INFO: (16) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.956224ms) May 14 14:35:40.768: INFO: (16) /api/v1/namespaces/proxy-1426/services/https:proxy-service-glmqw:tlsportname2/proxy/: tls qux (200; 4.989644ms) May 14 14:35:40.771: INFO: (17) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 3.012588ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.479397ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 3.448503ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 3.482359ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:460/proxy/: tls baz (200; 3.520294ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:1080/proxy/: test<... 
(200; 3.56131ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.487387ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 3.49611ms) May 14 14:35:40.772: INFO: (17) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test<... (200; 2.46756ms) May 14 14:35:40.776: INFO: (18) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:462/proxy/: tls qux (200; 2.680617ms) May 14 14:35:40.776: INFO: (18) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 2.894428ms) May 14 14:35:40.776: INFO: (18) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk/proxy/: test (200; 2.940125ms) May 14 14:35:40.776: INFO: (18) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 2.989691ms) May 14 14:35:40.776: INFO: (18) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 2.989779ms) May 14 14:35:40.776: INFO: (18) /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/: foo (200; 3.045267ms) May 14 14:35:40.777: INFO: (18) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test (200; 4.426849ms) May 14 14:35:40.783: INFO: (19) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname2/proxy/: bar (200; 4.476444ms) May 14 14:35:40.783: INFO: (19) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:1080/proxy/: ... (200; 4.45625ms) May 14 14:35:40.783: INFO: (19) /api/v1/namespaces/proxy-1426/pods/https:proxy-service-glmqw-bbvvk:443/proxy/: test<... (200; 4.493259ms) May 14 14:35:40.783: INFO: (19) /api/v1/namespaces/proxy-1426/pods/http:proxy-service-glmqw-bbvvk:162/proxy/: bar (200; 4.532363ms) May 14 14:35:40.783: INFO: (19) /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/: foo (200; 4.829144ms) STEP: deleting ReplicationController proxy-service-glmqw in namespace proxy-1426, will wait for the garbage collector to delete the pods May 14 14:35:40.841: INFO: Deleting ReplicationController proxy-service-glmqw took: 6.396007ms May 14 14:35:41.142: INFO: Terminating ReplicationController proxy-service-glmqw pods took: 300.22353ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:35:52.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1426" for this suite. 
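[Editor's note] The (0)..(19) iterations above are repeated GETs through the apiserver's proxy subresource for both pods and services. For reference, a minimal client-go sketch of the same kind of request -- not part of this run's source; the kubeconfig path, namespace, and pod/service names are copied from the log for illustration only, and the modern context-aware client-go signatures are assumed:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// GET through the pod proxy subresource, i.e. the path
	// /api/v1/namespaces/proxy-1426/pods/proxy-service-glmqw-bbvvk:160/proxy/
	podBody, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-1426").
		Resource("pods").
		Name("proxy-service-glmqw-bbvvk:160").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod proxy body: %q\n", podBody)

	// Same idea through the service proxy, i.e. the path
	// /api/v1/namespaces/proxy-1426/services/proxy-service-glmqw:portname1/proxy/
	svcBody, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-1426").
		Resource("services").
		Name("proxy-service-glmqw:portname1").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy body: %q\n", svcBody)
}

The "name:port" form in Name() selects which container or service port the apiserver proxies to, which is why the log exercises :160, :162, :443, :460 and :462 variants of the same pod.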
May 14 14:35:58.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:35:58.349: INFO: namespace proxy-1426 deletion completed in 6.103174686s • [SLOW TEST:29.142 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:35:58.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-4a7d38f0-71bc-4dd7-8b98-6d4cd124b4df [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:35:58.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4469" for this suite. May 14 14:36:04.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:36:04.525: INFO: namespace configmap-4469 deletion completed in 6.074085231s • [SLOW TEST:6.176 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:36:04.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-f04fc240-596e-456e-9184-5754246da5e6 STEP: Creating a pod to test consume configMaps May 14 14:36:04.589: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65" in namespace "projected-2731" to be "success or failure" May 14 14:36:04.611: INFO: Pod 
"pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65": Phase="Pending", Reason="", readiness=false. Elapsed: 21.457224ms May 14 14:36:06.615: INFO: Pod "pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026182128s May 14 14:36:08.619: INFO: Pod "pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029889072s May 14 14:36:10.623: INFO: Pod "pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034298564s STEP: Saw pod success May 14 14:36:10.623: INFO: Pod "pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65" satisfied condition "success or failure" May 14 14:36:10.626: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65 container projected-configmap-volume-test: STEP: delete the pod May 14 14:36:10.650: INFO: Waiting for pod pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65 to disappear May 14 14:36:10.666: INFO: Pod pod-projected-configmaps-ffce2bff-524c-486f-83ff-5a2b22a01b65 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:36:10.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2731" for this suite. May 14 14:36:16.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:36:16.794: INFO: namespace projected-2731 deletion completed in 6.124310233s • [SLOW TEST:12.268 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:36:16.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 14 14:36:17.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c" in namespace "downward-api-5191" to be "success or failure" May 14 14:36:17.099: INFO: Pod "downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.522917ms May 14 14:36:19.102: INFO: Pod "downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006824799s May 14 14:36:21.107: INFO: Pod "downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011416014s STEP: Saw pod success May 14 14:36:21.107: INFO: Pod "downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c" satisfied condition "success or failure" May 14 14:36:21.110: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c container client-container: STEP: delete the pod May 14 14:36:21.181: INFO: Waiting for pod downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c to disappear May 14 14:36:21.305: INFO: Pod downwardapi-volume-9c6a74f2-782f-4007-845a-22fed272270c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:36:21.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5191" for this suite. May 14 14:36:27.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:36:27.407: INFO: namespace downward-api-5191 deletion completed in 6.097419107s • [SLOW TEST:10.612 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 14 14:36:27.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 14 14:36:27.565: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
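[Editor's note] The "simple daemon set" just created boils down to a DaemonSet whose update strategy is RollingUpdate. A sketch of creating one through client-go -- not this test's actual source; the namespace and the nginx image are taken from the log, the label key and everything else are illustrative, and modern client-go signatures are assumed (`clientset` built as in the earlier proxy sketch):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createDaemonSet(ctx context.Context, clientset kubernetes.Interface) error {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-1961"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy under test: when the pod template
			// changes, daemon pods are replaced node by node.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	_, err := clientset.AppsV1().DaemonSets("daemonsets-1961").Create(ctx, ds, metav1.CreateOptions{})
	return err
}

The "can't tolerate node iruya-control-plane" lines that follow are expected: the pod template carries no toleration for the master taint, so that node is skipped when counting available daemon pods.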
May 14 14:36:27.597: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:27.622: INFO: Number of nodes with available pods: 0 May 14 14:36:27.622: INFO: Node iruya-worker is running more than one daemon pod May 14 14:36:28.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:28.632: INFO: Number of nodes with available pods: 0 May 14 14:36:28.632: INFO: Node iruya-worker is running more than one daemon pod May 14 14:36:29.767: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:29.771: INFO: Number of nodes with available pods: 0 May 14 14:36:29.771: INFO: Node iruya-worker is running more than one daemon pod May 14 14:36:30.652: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:30.675: INFO: Number of nodes with available pods: 0 May 14 14:36:30.675: INFO: Node iruya-worker is running more than one daemon pod May 14 14:36:31.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:31.650: INFO: Number of nodes with available pods: 0 May 14 14:36:31.650: INFO: Node iruya-worker is running more than one daemon pod May 14 14:36:32.627: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:32.630: INFO: Number of nodes with available pods: 1 May 14 14:36:32.630: INFO: Node iruya-worker is running more than one daemon pod May 14 14:36:33.646: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:33.648: INFO: Number of nodes with available pods: 2 May 14 14:36:33.648: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 14 14:36:33.693: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:33.693: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:33.832: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:34.836: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:34.836: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
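[Editor's note] The "Update daemon pods image" step above amounts to mutating the pod template's container image, after which the controller performs the rolling replacement that the "Wrong image for pod" lines below are polling for. A hedged sketch, again not this test's source -- both image references come from the log, while RetryOnConflict usage and modern client-go signatures are assumptions:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func updateDaemonSetImage(ctx context.Context, clientset kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := clientset.AppsV1().DaemonSets("daemonsets-1961").Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Flip nginx:1.14-alpine -> redis:1.0; this single template change is
		// what triggers the node-by-node rollout logged below.
		ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
		_, err = clientset.AppsV1().DaemonSets("daemonsets-1961").Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}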
May 14 14:36:34.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:35.838: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:35.838: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:35.843: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:36.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:36.837: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:36.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:37.838: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:37.838: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:37.838: INFO: Pod daemon-set-ww2rr is not available May 14 14:36:37.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:38.838: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:38.838: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:38.838: INFO: Pod daemon-set-ww2rr is not available May 14 14:36:38.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:39.856: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:39.856: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:39.856: INFO: Pod daemon-set-ww2rr is not available May 14 14:36:39.859: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:40.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:40.837: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 14 14:36:40.837: INFO: Pod daemon-set-ww2rr is not available May 14 14:36:40.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:41.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:41.837: INFO: Wrong image for pod: daemon-set-ww2rr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:41.837: INFO: Pod daemon-set-ww2rr is not available May 14 14:36:41.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:42.842: INFO: Pod daemon-set-9jlf6 is not available May 14 14:36:42.842: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:42.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:43.837: INFO: Pod daemon-set-9jlf6 is not available May 14 14:36:43.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:43.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:44.836: INFO: Pod daemon-set-9jlf6 is not available May 14 14:36:44.836: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:44.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:45.837: INFO: Pod daemon-set-9jlf6 is not available May 14 14:36:45.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:45.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:46.838: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:46.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:47.838: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:47.838: INFO: Pod daemon-set-hlspw is not available May 14 14:36:47.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:48.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 14 14:36:48.837: INFO: Pod daemon-set-hlspw is not available May 14 14:36:48.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:49.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:49.837: INFO: Pod daemon-set-hlspw is not available May 14 14:36:49.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:50.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:50.837: INFO: Pod daemon-set-hlspw is not available May 14 14:36:50.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:51.837: INFO: Wrong image for pod: daemon-set-hlspw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 14:36:51.838: INFO: Pod daemon-set-hlspw is not available May 14 14:36:51.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:52.838: INFO: Pod daemon-set-585bt is not available May 14 14:36:52.844: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
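[Editor's note] The "still running on every node" check that follows can be expressed purely in terms of DaemonSet status fields. A sketch of such a wait loop -- poll interval and timeout are illustrative, and modern client-go signatures are assumed:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForRollout(ctx context.Context, clientset kubernetes.Interface) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := clientset.AppsV1().DaemonSets("daemonsets-1961").Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Done when every node that should run a daemon pod runs an updated,
		// available one -- the "Number of running nodes: 2, number of
		// available pods: 2" condition the log eventually reaches.
		done := ds.Status.UpdatedNumberScheduled == ds.Status.DesiredNumberScheduled &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled
		return done, nil
	})
}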
May 14 14:36:52.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:52.849: INFO: Number of nodes with available pods: 1 May 14 14:36:52.849: INFO: Node iruya-worker2 is running more than one daemon pod May 14 14:36:53.853: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:53.856: INFO: Number of nodes with available pods: 1 May 14 14:36:53.856: INFO: Node iruya-worker2 is running more than one daemon pod May 14 14:36:54.853: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:54.855: INFO: Number of nodes with available pods: 1 May 14 14:36:54.855: INFO: Node iruya-worker2 is running more than one daemon pod May 14 14:36:55.887: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 14:36:55.891: INFO: Number of nodes with available pods: 2 May 14 14:36:55.891: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1961, will wait for the garbage collector to delete the pods May 14 14:36:55.963: INFO: Deleting DaemonSet.extensions daemon-set took: 6.239921ms May 14 14:36:56.363: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.265789ms May 14 14:37:12.695: INFO: Number of nodes with available pods: 0 May 14 14:37:12.695: INFO: Number of running nodes: 0, number of available pods: 0 May 14 14:37:12.748: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1961/daemonsets","resourceVersion":"10873804"},"items":null} May 14 14:37:12.784: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1961/pods","resourceVersion":"10873805"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 14 14:37:12.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1961" for this suite. May 14 14:37:18.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 14:37:19.028: INFO: namespace daemonsets-1961 deletion completed in 6.174746046s • [SLOW TEST:51.622 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 14 14:37:19.029: INFO: Running AfterSuite actions on all nodes May 14 14:37:19.029: INFO: Running AfterSuite actions on node 1 May 14 14:37:19.029: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6081.473 seconds SUCCESS! 
-- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS