I0129 12:56:07.378049 8 e2e.go:243] Starting e2e run "6cfa43eb-0d70-4f3f-b409-bbfece2c1da4" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580302565 - Will randomize all specs
Will run 215 of 4412 specs

Jan 29 12:56:07.765: INFO: >>> kubeConfig: /root/.kube/config
Jan 29 12:56:07.770: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 29 12:56:07.806: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 29 12:56:07.857: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 29 12:56:07.857: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 29 12:56:07.857: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 29 12:56:07.871: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 29 12:56:07.871: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 29 12:56:07.871: INFO: e2e test version: v1.15.7
Jan 29 12:56:07.874: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:56:07.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 29 12:56:08.070: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 29 12:56:08.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2428'
Jan 29 12:56:12.677: INFO: stderr: ""
Jan 29 12:56:12.677: INFO: stdout: "pod/pause created\n"
Jan 29 12:56:12.678: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 29 12:56:12.678: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2428" to be "running and ready"
Jan 29 12:56:12.746: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 68.279492ms
Jan 29 12:56:14.770: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092195093s
Jan 29 12:56:16.818: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139741533s
Jan 29 12:56:18.829: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150526613s
Jan 29 12:56:20.843: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165329889s
Jan 29 12:56:22.873: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.19494661s
Jan 29 12:56:22.873: INFO: Pod "pause" satisfied condition "running and ready"
Jan 29 12:56:22.873: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 29 12:56:22.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2428'
Jan 29 12:56:23.187: INFO: stderr: ""
Jan 29 12:56:23.187: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 29 12:56:23.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2428'
Jan 29 12:56:23.299: INFO: stderr: ""
Jan 29 12:56:23.299: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 29 12:56:23.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2428'
Jan 29 12:56:23.450: INFO: stderr: ""
Jan 29 12:56:23.450: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 29 12:56:23.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2428'
Jan 29 12:56:23.570: INFO: stderr: ""
Jan 29 12:56:23.570: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 29 12:56:23.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2428'
Jan 29 12:56:23.951: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:56:23.952: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 29 12:56:23.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2428'
Jan 29 12:56:24.253: INFO: stderr: "No resources found.\n"
Jan 29 12:56:24.254: INFO: stdout: ""
Jan 29 12:56:24.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2428 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 29 12:56:24.496: INFO: stderr: ""
Jan 29 12:56:24.496: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:56:24.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2428" for this suite.
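For reference, the add/verify/remove cycle this test drives can be replayed by hand with the same kubectl invocations; a minimal sketch (pod and namespace names are taken from this run, any running pod works):

    kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-2428
    kubectl get pod pause -L testing-label --namespace=kubectl-2428     # -L adds a TESTING-LABEL column
    kubectl label pods pause testing-label- --namespace=kubectl-2428    # trailing '-' removes the label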
Jan 29 12:56:30.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:56:30.659: INFO: namespace kubectl-2428 deletion completed in 6.155064737s

• [SLOW TEST:22.785 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:56:30.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:56:30.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca" in namespace "projected-2404" to be "success or failure"
Jan 29 12:56:30.898: INFO: Pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca": Phase="Pending", Reason="", readiness=false. Elapsed: 92.678189ms
Jan 29 12:56:32.925: INFO: Pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119975204s
Jan 29 12:56:34.934: INFO: Pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128068487s
Jan 29 12:56:36.947: INFO: Pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141414129s
Jan 29 12:56:38.959: INFO: Pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153547336s
STEP: Saw pod success
Jan 29 12:56:38.959: INFO: Pod "downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca" satisfied condition "success or failure"
Jan 29 12:56:38.964: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca container client-container: <nil>
STEP: delete the pod
Jan 29 12:56:39.074: INFO: Waiting for pod downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca to disappear
Jan 29 12:56:39.082: INFO: Pod downwardapi-volume-5fe9e68a-0c09-49b2-afd7-f43b14b24aca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:56:39.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2404" for this suite.
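The pod this test creates is not printed in the log; a minimal sketch of the kind of manifest it exercises, assuming illustrative names and a 250m CPU request (the real spec lives in the e2e sources):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m                          # assumed value; the test asserts whatever it requested
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.cpu       # the container's CPU request, exposed as a file
    EOF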
Jan 29 12:56:45.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:56:45.217: INFO: namespace projected-2404 deletion completed in 6.123272044s

• [SLOW TEST:14.557 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:56:45.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 29 12:56:45.826: INFO: created pod pod-service-account-defaultsa
Jan 29 12:56:45.826: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 29 12:56:45.848: INFO: created pod pod-service-account-mountsa
Jan 29 12:56:45.848: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 29 12:56:45.886: INFO: created pod pod-service-account-nomountsa
Jan 29 12:56:45.886: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 29 12:56:45.965: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 29 12:56:45.965: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 29 12:56:45.979: INFO: created pod pod-service-account-mountsa-mountspec
Jan 29 12:56:45.979: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 29 12:56:46.006: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 29 12:56:46.006: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 29 12:56:46.950: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 29 12:56:46.950: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 29 12:56:46.987: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 29 12:56:46.987: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 29 12:56:47.529: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 29 12:56:47.529: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:56:47.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9437" for this suite.
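The pod names above encode the combinations being checked: the ServiceAccount's automountServiceAccountToken setting, the pod spec's setting, and the default SA; when both are set, the pod-level field wins. A minimal sketch of opting out at the pod level (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-no-token                      # illustrative name
    spec:
      automountServiceAccountToken: false     # no token volume, regardless of the ServiceAccount default
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.1
    EOF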
Jan 29 12:57:20.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:57:20.864: INFO: namespace svcaccounts-9437 deletion completed in 33.287396744s

• [SLOW TEST:35.646 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:57:20.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 29 12:57:21.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2407'
Jan 29 12:57:21.522: INFO: stderr: ""
Jan 29 12:57:21.522: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 12:57:21.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2407'
Jan 29 12:57:21.653: INFO: stderr: ""
Jan 29 12:57:21.653: INFO: stdout: "update-demo-nautilus-hhsrf update-demo-nautilus-prfbh "
Jan 29 12:57:21.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhsrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:21.834: INFO: stderr: ""
Jan 29 12:57:21.834: INFO: stdout: ""
Jan 29 12:57:21.834: INFO: update-demo-nautilus-hhsrf is created but not running
Jan 29 12:57:26.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2407'
Jan 29 12:57:27.211: INFO: stderr: ""
Jan 29 12:57:27.212: INFO: stdout: "update-demo-nautilus-hhsrf update-demo-nautilus-prfbh "
Jan 29 12:57:27.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhsrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:27.376: INFO: stderr: ""
Jan 29 12:57:27.376: INFO: stdout: ""
Jan 29 12:57:27.376: INFO: update-demo-nautilus-hhsrf is created but not running
Jan 29 12:57:32.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2407'
Jan 29 12:57:32.742: INFO: stderr: ""
Jan 29 12:57:32.742: INFO: stdout: "update-demo-nautilus-hhsrf update-demo-nautilus-prfbh "
Jan 29 12:57:32.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhsrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:32.946: INFO: stderr: ""
Jan 29 12:57:32.946: INFO: stdout: ""
Jan 29 12:57:32.946: INFO: update-demo-nautilus-hhsrf is created but not running
Jan 29 12:57:37.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2407'
Jan 29 12:57:38.188: INFO: stderr: ""
Jan 29 12:57:38.188: INFO: stdout: "update-demo-nautilus-hhsrf update-demo-nautilus-prfbh "
Jan 29 12:57:38.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhsrf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:38.304: INFO: stderr: ""
Jan 29 12:57:38.304: INFO: stdout: "true"
Jan 29 12:57:38.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhsrf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:38.469: INFO: stderr: ""
Jan 29 12:57:38.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 12:57:38.469: INFO: validating pod update-demo-nautilus-hhsrf
Jan 29 12:57:38.500: INFO: got data: { "image": "nautilus.jpg" }
Jan 29 12:57:38.501: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 29 12:57:38.501: INFO: update-demo-nautilus-hhsrf is verified up and running
Jan 29 12:57:38.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prfbh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:38.661: INFO: stderr: ""
Jan 29 12:57:38.661: INFO: stdout: "true"
Jan 29 12:57:38.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prfbh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2407'
Jan 29 12:57:38.855: INFO: stderr: ""
Jan 29 12:57:38.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 12:57:38.855: INFO: validating pod update-demo-nautilus-prfbh
Jan 29 12:57:38.903: INFO: got data: { "image": "nautilus.jpg" }
Jan 29 12:57:38.903: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 29 12:57:38.903: INFO: update-demo-nautilus-prfbh is verified up and running
STEP: using delete to clean up resources
Jan 29 12:57:38.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2407'
Jan 29 12:57:39.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:57:39.110: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 29 12:57:39.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2407'
Jan 29 12:57:39.266: INFO: stderr: "No resources found.\n"
Jan 29 12:57:39.266: INFO: stdout: ""
Jan 29 12:57:39.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2407 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 29 12:57:39.525: INFO: stderr: ""
Jan 29 12:57:39.525: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:57:39.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2407" for this suite.
Jan 29 12:58:01.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:58:01.736: INFO: namespace kubectl-2407 deletion completed in 22.178935266s

• [SLOW TEST:40.871 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:58:01.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 12:58:01.928: INFO: Create a RollingUpdate DaemonSet
Jan 29 12:58:01.933: INFO: Check that daemon pods launch on every node of the cluster
Jan 29 12:58:01.972: INFO: Number of nodes with available pods: 0
Jan 29 12:58:01.972: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:02.994: INFO: Number of nodes with available pods: 0
Jan 29 12:58:02.994: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:04.547: INFO: Number of nodes with available pods: 0
Jan 29 12:58:04.547: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:04.981: INFO: Number of nodes with available pods: 0
Jan 29 12:58:04.981: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:05.990: INFO: Number of nodes with available pods: 0
Jan 29 12:58:05.990: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:07.003: INFO: Number of nodes with available pods: 0
Jan 29 12:58:07.003: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:09.328: INFO: Number of nodes with available pods: 0
Jan 29 12:58:09.328: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:09.986: INFO: Number of nodes with available pods: 0
Jan 29 12:58:09.986: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:10.998: INFO: Number of nodes with available pods: 0
Jan 29 12:58:10.998: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:11.996: INFO: Number of nodes with available pods: 0
Jan 29 12:58:11.996: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:12.981: INFO: Number of nodes with available pods: 1
Jan 29 12:58:12.981: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:14.020: INFO: Number of nodes with available pods: 1
Jan 29 12:58:14.020: INFO: Node iruya-node is running more than one daemon pod
Jan 29 12:58:14.986: INFO: Number of nodes with available pods: 2
Jan 29 12:58:14.986: INFO: Number of running nodes: 2, number of available pods: 2
Jan 29 12:58:14.986: INFO: Update the DaemonSet to trigger a rollout
Jan 29 12:58:14.996: INFO: Updating DaemonSet daemon-set
Jan 29 12:58:21.040: INFO: Roll back the DaemonSet before rollout is complete
Jan 29 12:58:21.059: INFO: Updating DaemonSet daemon-set
Jan 29 12:58:21.060: INFO: Make sure DaemonSet rollback is complete
Jan 29 12:58:21.452: INFO: Wrong image for pod: daemon-set-5zpfx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 29 12:58:21.453: INFO: Pod daemon-set-5zpfx is not available
Jan 29 12:58:22.605: INFO: Wrong image for pod: daemon-set-5zpfx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 29 12:58:22.606: INFO: Pod daemon-set-5zpfx is not available
Jan 29 12:58:23.491: INFO: Wrong image for pod: daemon-set-5zpfx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 29 12:58:23.491: INFO: Pod daemon-set-5zpfx is not available
Jan 29 12:58:24.495: INFO: Wrong image for pod: daemon-set-5zpfx. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 29 12:58:24.495: INFO: Pod daemon-set-5zpfx is not available
Jan 29 12:58:25.488: INFO: Pod daemon-set-4lghk is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-386, will wait for the garbage collector to delete the pods
Jan 29 12:58:25.567: INFO: Deleting DaemonSet.extensions daemon-set took: 13.95834ms
Jan 29 12:58:26.668: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.10092417s
Jan 29 12:58:36.587: INFO: Number of nodes with available pods: 0
Jan 29 12:58:36.588: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 12:58:36.597: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-386/daemonsets","resourceVersion":"22309655"},"items":null}
Jan 29 12:58:36.602: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-386/pods","resourceVersion":"22309655"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:58:36.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-386" for this suite.
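The test drives the update and rollback through the API; an approximate kubectl equivalent, assuming the DaemonSet's container is named app (the real container name is not shown in this log):

    kubectl set image ds/daemon-set app=foo:non-existent -n daemonsets-386   # bad image: the rollout stalls
    kubectl rollout undo ds/daemon-set -n daemonsets-386                     # roll back before it completes
    kubectl rollout status ds/daemon-set -n daemonsets-386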
Jan 29 12:58:42.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:58:42.829: INFO: namespace daemonsets-386 deletion completed in 6.205984445s

• [SLOW TEST:41.093 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:58:42.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 29 12:58:43.022: INFO: Waiting up to 5m0s for pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920" in namespace "downward-api-6485" to be "success or failure"
Jan 29 12:58:43.039: INFO: Pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920": Phase="Pending", Reason="", readiness=false. Elapsed: 17.079473ms
Jan 29 12:58:45.048: INFO: Pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026620423s
Jan 29 12:58:47.057: INFO: Pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034654927s
Jan 29 12:58:49.073: INFO: Pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05139079s
Jan 29 12:58:51.080: INFO: Pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058598561s
STEP: Saw pod success
Jan 29 12:58:51.081: INFO: Pod "downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920" satisfied condition "success or failure"
Jan 29 12:58:51.086: INFO: Trying to get logs from node iruya-node pod downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920 container dapi-container: <nil>
STEP: delete the pod
Jan 29 12:58:51.137: INFO: Waiting for pod downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920 to disappear
Jan 29 12:58:51.156: INFO: Pod downward-api-26e2e07b-a634-4bb9-80c3-266b4b3b2920 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:58:51.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6485" for this suite.
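A minimal sketch of the downward-API env wiring this test verifies, with illustrative names; fieldRef is the documented mechanism for all three values:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-env-example                  # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "env | grep MY_POD_"]
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name        # pod name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace   # namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP         # pod IP
    EOF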
Jan 29 12:58:57.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:58:57.346: INFO: namespace downward-api-6485 deletion completed in 6.154697637s

• [SLOW TEST:14.516 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:58:57.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 29 12:58:57.496: INFO: Waiting up to 5m0s for pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4" in namespace "containers-3881" to be "success or failure"
Jan 29 12:58:57.560: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4": Phase="Pending", Reason="", readiness=false. Elapsed: 63.439896ms
Jan 29 12:58:59.580: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084025704s
Jan 29 12:59:01.625: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129060544s
Jan 29 12:59:03.640: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143668052s
Jan 29 12:59:05.700: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203938247s
Jan 29 12:59:08.300: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.803340103s
STEP: Saw pod success
Jan 29 12:59:08.300: INFO: Pod "client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4" satisfied condition "success or failure"
Jan 29 12:59:08.314: INFO: Trying to get logs from node iruya-node pod client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4 container test-container: <nil>
STEP: delete the pod
Jan 29 12:59:08.580: INFO: Waiting for pod client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4 to disappear
Jan 29 12:59:08.586: INFO: Pod client-containers-89a8d419-98ce-413c-b937-a6d04f7568a4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:59:08.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3881" for this suite.
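A minimal sketch of the override being tested, with illustrative names: setting args replaces the image's default CMD while leaving its ENTRYPOINT (the Kubernetes command field) untouched:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: args-override-example             # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        args: ["echo", "overridden", "arguments"]   # replaces the image's default CMD
    EOF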
Jan 29 12:59:14.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:59:14.729: INFO: namespace containers-3881 deletion completed in 6.136850291s

• [SLOW TEST:17.382 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 12:59:14.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 29 12:59:14.827: INFO: Waiting up to 5m0s for pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6" in namespace "var-expansion-8456" to be "success or failure"
Jan 29 12:59:14.880: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6": Phase="Pending", Reason="", readiness=false. Elapsed: 53.103101ms
Jan 29 12:59:16.892: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064528096s
Jan 29 12:59:18.904: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076642894s
Jan 29 12:59:20.927: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099911933s
Jan 29 12:59:22.933: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106059502s
Jan 29 12:59:24.942: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114379834s
STEP: Saw pod success
Jan 29 12:59:24.942: INFO: Pod "var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6" satisfied condition "success or failure"
Jan 29 12:59:24.946: INFO: Trying to get logs from node iruya-node pod var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6 container dapi-container: <nil>
STEP: delete the pod
Jan 29 12:59:25.001: INFO: Waiting for pod var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6 to disappear
Jan 29 12:59:25.011: INFO: Pod var-expansion-fa8d522c-5baf-40a4-b8e5-a53f3db611c6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 12:59:25.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8456" for this suite.
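A minimal sketch of the $(VAR) composition being tested, with illustrative names; an env entry can reference any variable defined earlier in the same list:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-example             # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "echo $COMPOSED_VAR"]
        env:
        - name: BASE_VAR
          value: "base-value"
        - name: COMPOSED_VAR
          value: "prefix-$(BASE_VAR)-suffix"  # expanded by the kubelet before the container starts
    EOF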
Jan 29 12:59:31.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 12:59:31.187: INFO: namespace var-expansion-8456 deletion completed in 6.168727919s • [SLOW TEST:16.458 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 12:59:31.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 12:59:31.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2420" for this suite. Jan 29 12:59:37.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 12:59:37.434: INFO: namespace services-2420 deletion completed in 6.132481977s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.247 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 12:59:37.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-e36548f4-6bd6-4e95-a508-1d231c16774b STEP: Creating a pod to test consume configMaps Jan 29 12:59:37.615: INFO: Waiting up to 5m0s for pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96" in namespace "configmap-5090" to be "success or failure" Jan 29 12:59:37.631: INFO: Pod 
"pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96": Phase="Pending", Reason="", readiness=false. Elapsed: 16.329282ms Jan 29 12:59:39.758: INFO: Pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143236162s Jan 29 12:59:41.766: INFO: Pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151333267s Jan 29 12:59:43.784: INFO: Pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168719658s Jan 29 12:59:45.803: INFO: Pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187461324s Jan 29 12:59:47.812: INFO: Pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.196500515s STEP: Saw pod success Jan 29 12:59:47.812: INFO: Pod "pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96" satisfied condition "success or failure" Jan 29 12:59:47.816: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96 container configmap-volume-test: STEP: delete the pod Jan 29 12:59:47.968: INFO: Waiting for pod pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96 to disappear Jan 29 12:59:47.979: INFO: Pod pod-configmaps-2aa39310-feb7-489e-9d09-19fb5c076d96 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 12:59:47.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5090" for this suite. Jan 29 12:59:54.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 12:59:54.109: INFO: namespace configmap-5090 deletion completed in 6.122817016s • [SLOW TEST:16.674 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 12:59:54.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ba485a7f-e4ac-4476-bc87-9f17ed2804ac STEP: Creating a pod to test consume secrets Jan 29 12:59:54.230: INFO: Waiting up to 5m0s for pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7" in namespace "secrets-2333" to be "success or failure" Jan 29 12:59:54.253: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.0786ms Jan 29 12:59:56.269: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038547347s Jan 29 12:59:58.287: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056978819s Jan 29 13:00:00.338: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107937519s Jan 29 13:00:02.352: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122138199s Jan 29 13:00:04.370: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139492386s STEP: Saw pod success Jan 29 13:00:04.370: INFO: Pod "pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7" satisfied condition "success or failure" Jan 29 13:00:04.375: INFO: Trying to get logs from node iruya-node pod pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7 container secret-volume-test: STEP: delete the pod Jan 29 13:00:04.451: INFO: Waiting for pod pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7 to disappear Jan 29 13:00:04.455: INFO: Pod pod-secrets-70ebe453-e656-4ce6-aa2e-b5568032e8c7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:00:04.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2333" for this suite. Jan 29 13:00:10.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:00:10.967: INFO: namespace secrets-2333 deletion completed in 6.498997139s • [SLOW TEST:16.857 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:00:10.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 29 13:00:11.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5596' Jan 29 13:00:11.320: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and 
will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 29 13:00:11.320: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jan 29 13:00:11.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5596' Jan 29 13:00:11.633: INFO: stderr: "" Jan 29 13:00:11.633: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:00:11.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5596" for this suite. Jan 29 13:00:17.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:00:17.969: INFO: namespace kubectl-5596 deletion completed in 6.305911054s • [SLOW TEST:7.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:00:17.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 29 13:00:18.135: INFO: Waiting up to 5m0s for pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1" in namespace "downward-api-318" to be "success or failure" Jan 29 13:00:18.177: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1": Phase="Pending", Reason="", readiness=false. Elapsed: 42.089326ms Jan 29 13:00:20.185: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049891819s Jan 29 13:00:22.207: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072275322s Jan 29 13:00:24.216: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080569605s Jan 29 13:00:26.221: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.086402786s Jan 29 13:00:28.296: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.161006604s STEP: Saw pod success Jan 29 13:00:28.296: INFO: Pod "downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1" satisfied condition "success or failure" Jan 29 13:00:28.307: INFO: Trying to get logs from node iruya-node pod downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1 container dapi-container: STEP: delete the pod Jan 29 13:00:28.422: INFO: Waiting for pod downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1 to disappear Jan 29 13:00:28.443: INFO: Pod downward-api-b0d83dd2-843e-4a94-b247-dddf158634c1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:00:28.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-318" for this suite. Jan 29 13:00:34.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:00:34.684: INFO: namespace downward-api-318 deletion completed in 6.223021495s • [SLOW TEST:16.714 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:00:34.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 29 13:00:43.000: INFO: Waiting up to 5m0s for pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd" in namespace "pods-5121" to be "success or failure" Jan 29 13:00:43.029: INFO: Pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.414394ms Jan 29 13:00:45.037: INFO: Pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036525593s Jan 29 13:00:47.048: INFO: Pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047600535s Jan 29 13:00:49.079: INFO: Pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078751161s Jan 29 13:00:51.094: INFO: Pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.093707017s STEP: Saw pod success Jan 29 13:00:51.095: INFO: Pod "client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd" satisfied condition "success or failure" Jan 29 13:00:51.098: INFO: Trying to get logs from node iruya-node pod client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd container env3cont: STEP: delete the pod Jan 29 13:00:51.165: INFO: Waiting for pod client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd to disappear Jan 29 13:00:51.172: INFO: Pod client-envvars-be2cfdc7-0d2d-4f46-aff4-419ff45664fd no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:00:51.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5121" for this suite. Jan 29 13:01:37.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:01:37.436: INFO: namespace pods-5121 deletion completed in 46.25436351s • [SLOW TEST:62.750 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:01:37.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 29 13:01:37.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-125' Jan 29 13:01:37.713: INFO: stderr: "" Jan 29 13:01:37.713: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jan 29 13:01:37.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-125' Jan 29 13:01:41.797: INFO: stderr: "" Jan 29 13:01:41.797: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:01:41.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-125" for this suite. Jan 29 13:01:47.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:01:47.996: INFO: namespace kubectl-125 deletion completed in 6.176784226s • [SLOW TEST:10.559 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:01:47.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 29 13:01:48.140: INFO: Waiting up to 5m0s for pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b" in namespace "emptydir-2822" to be "success or failure" Jan 29 13:01:48.149: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.770197ms Jan 29 13:01:50.161: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021528581s Jan 29 13:01:52.179: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038954078s Jan 29 13:01:54.186: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046104692s Jan 29 13:01:56.203: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063201719s Jan 29 13:01:58.210: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070825149s STEP: Saw pod success Jan 29 13:01:58.211: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b" satisfied condition "success or failure" Jan 29 13:01:58.214: INFO: Trying to get logs from node iruya-node pod pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b container test-container: STEP: delete the pod Jan 29 13:01:58.280: INFO: Waiting for pod pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b to disappear Jan 29 13:01:58.326: INFO: Pod pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:01:58.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2822" for this suite. 
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:01:47.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 29 13:01:48.140: INFO: Waiting up to 5m0s for pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b" in namespace "emptydir-2822" to be "success or failure"
Jan 29 13:01:48.149: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.770197ms
Jan 29 13:01:50.161: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021528581s
Jan 29 13:01:52.179: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038954078s
Jan 29 13:01:54.186: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046104692s
Jan 29 13:01:56.203: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063201719s
Jan 29 13:01:58.210: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070825149s
STEP: Saw pod success
Jan 29 13:01:58.211: INFO: Pod "pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b" satisfied condition "success or failure"
Jan 29 13:01:58.214: INFO: Trying to get logs from node iruya-node pod pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b container test-container: 
STEP: delete the pod
Jan 29 13:01:58.280: INFO: Waiting for pod pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b to disappear
Jan 29 13:01:58.326: INFO: Pod pod-1594b60d-6592-4245-a6c6-a48c4c6a3b7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:01:58.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2822" for this suite.
Jan 29 13:02:04.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:02:04.511: INFO: namespace emptydir-2822 deletion completed in 6.176874244s

• [SLOW TEST:16.514 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
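Note: the (non-root,0777,tmpfs) variant boils down to mounting an emptyDir with medium Memory (tmpfs), writing a file as a non-root user, and asserting the 0777 mode is honored. A minimal manifest in the same spirit (names, image and UID are illustrative; the real test uses a purpose-built mounttest image that prints and checks the mode bits):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      securityContext:
        runAsUser: 1001            # non-root, as in this variant
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && stat -c '%a' /mnt/test/f"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory           # tmpfs-backed emptyDir
    EOF
    kubectl logs emptydir-tmpfs-demo   # expect: 777
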
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:02:04.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 29 13:02:05.338: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:02:05.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5191" for this suite.
Jan 29 13:02:11.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:02:11.711: INFO: namespace kubectl-5191 deletion completed in 6.225552306s

• [SLOW TEST:7.200 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
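Note: -p 0 asks kubectl proxy to bind an ephemeral port, and the test then curls /api/ through whatever port was chosen. By hand (the port in the example output is illustrative):

    kubectl proxy -p 0 &
    # kubectl prints the address it actually bound, e.g.:
    #   Starting to serve on 127.0.0.1:37285
    curl http://127.0.0.1:37285/api/   # should return the APIVersions object
    kill %1
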
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:02:11.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:02:11.884: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27" in namespace "downward-api-3710" to be "success or failure"
Jan 29 13:02:11.976: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27": Phase="Pending", Reason="", readiness=false. Elapsed: 92.398169ms
Jan 29 13:02:13.983: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099733426s
Jan 29 13:02:15.996: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111986138s
Jan 29 13:02:18.007: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123035592s
Jan 29 13:02:20.021: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13773058s
Jan 29 13:02:22.037: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15286757s
STEP: Saw pod success
Jan 29 13:02:22.037: INFO: Pod "downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27" satisfied condition "success or failure"
Jan 29 13:02:22.041: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27 container client-container: 
STEP: delete the pod
Jan 29 13:02:22.230: INFO: Waiting for pod downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27 to disappear
Jan 29 13:02:22.242: INFO: Pod downwardapi-volume-c9f6ea58-c3f9-4f8e-9dc1-b542ef9eab27 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:02:22.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3710" for this suite.
Jan 29 13:02:28.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:02:28.471: INFO: namespace downward-api-3710 deletion completed in 6.193706987s

• [SLOW TEST:16.758 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
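Note: the pod here mounts a downwardAPI volume that exposes the container's CPU limit as a file. A sketch of the same shape (names and the 500m limit are illustrative; with the default divisor of 1, the downward API rounds the value up to whole cores):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 500m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
    EOF
    kubectl logs downwardapi-cpu-demo   # prints 1 (500m rounded up to a whole core)
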
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:02:28.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-e902c344-02ea-4997-8ae6-a9c8d619d004
STEP: Creating secret with name s-test-opt-upd-0fdf01ad-55ec-4ef3-9420-fe851b043215
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e902c344-02ea-4997-8ae6-a9c8d619d004
STEP: Updating secret s-test-opt-upd-0fdf01ad-55ec-4ef3-9420-fe851b043215
STEP: Creating secret with name s-test-opt-create-19e12028-4d9a-4973-8528-a15499bd0774
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:02:49.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-515" for this suite.
Jan 29 13:03:13.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:03:13.207: INFO: namespace secrets-515 deletion completed in 24.139853129s

• [SLOW TEST:44.736 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
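Note on the choreography above: two secrets are mounted, one is then deleted, one is updated, and a third is created while the pod is already running; because the mounts are marked optional, the pod keeps running and the kubelet re-syncs the mounted files. A hand-rolled version of the update half (names illustrative; changes can take roughly a kubelet sync period to appear):

    kubectl create secret generic s-test --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-watch-demo
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/secret-volume/data-1 2>/dev/null; echo; sleep 5; done"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: s-test
          optional: true     # pod starts (and keeps running) even if the secret is absent
    EOF
    # change the secret and watch the mounted file follow:
    kubectl create secret generic s-test --from-literal=data-1=value-2 \
      --dry-run -o yaml | kubectl apply -f -    # on newer kubectl: --dry-run=client
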
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:03:13.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:03:13.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4" in namespace "projected-4115" to be "success or failure"
Jan 29 13:03:13.440: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.201954ms
Jan 29 13:03:15.455: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05180754s
Jan 29 13:03:17.462: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058603548s
Jan 29 13:03:19.469: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065635643s
Jan 29 13:03:21.489: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086430078s
Jan 29 13:03:23.510: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10737858s
Jan 29 13:03:25.523: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.120076254s
STEP: Saw pod success
Jan 29 13:03:25.523: INFO: Pod "downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4" satisfied condition "success or failure"
Jan 29 13:03:25.528: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4 container client-container: 
STEP: delete the pod
Jan 29 13:03:25.679: INFO: Waiting for pod downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4 to disappear
Jan 29 13:03:25.686: INFO: Pod downwardapi-volume-7310ce84-005b-4deb-bf6b-fadabe3781a4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:03:25.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4115" for this suite.
Jan 29 13:03:31.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:03:31.829: INFO: namespace projected-4115 deletion completed in 6.136624891s

• [SLOW TEST:18.622 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
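Note: the wrinkle in this spec is that the container sets no memory limit, so the downward API file falls back to the node's allocatable memory. A sketch of the same idea (illustrative names; here the downwardAPI source sits inside a projected volume rather than a plain downwardAPI volume):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mem-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]   # no memory limit set, on purpose
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF
    # kubectl logs projected-mem-demo prints the node's allocatable memory in bytes,
    # the same quantity as: kubectl get node <node> -o jsonpath='{.status.allocatable.memory}'
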
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:03:31.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:03:43.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7935" for this suite.
Jan 29 13:04:05.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:04:05.282: INFO: namespace replication-controller-7935 deletion completed in 22.209071966s

• [SLOW TEST:33.451 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
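Note: the given/when/then steps reduce to creating a bare pod with a label and then a ReplicationController whose selector matches it; instead of spawning a replacement, the controller adopts the existing pod, which gains an ownerReference. Roughly (names illustrative):

    kubectl run pod-adoption --image=nginx --restart=Never --labels=name=pod-adoption
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: pod-adoption
            image: nginx
    EOF
    # the pre-existing pod now carries an ownerReference pointing at the RC:
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].name}'
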
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:04:05.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fa630182-de24-4e3f-8c2d-9d518680d880
STEP: Creating a pod to test consume secrets
Jan 29 13:04:05.395: INFO: Waiting up to 5m0s for pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b" in namespace "secrets-9737" to be "success or failure"
Jan 29 13:04:05.416: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.31695ms
Jan 29 13:04:07.424: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029187884s
Jan 29 13:04:09.432: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037198262s
Jan 29 13:04:11.439: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044139777s
Jan 29 13:04:13.457: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061980093s
Jan 29 13:04:15.465: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069555045s
STEP: Saw pod success
Jan 29 13:04:15.465: INFO: Pod "pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b" satisfied condition "success or failure"
Jan 29 13:04:15.469: INFO: Trying to get logs from node iruya-node pod pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b container secret-env-test: 
STEP: delete the pod
Jan 29 13:04:15.514: INFO: Waiting for pod pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b to disappear
Jan 29 13:04:15.518: INFO: Pod pod-secrets-828b54d5-f027-4e4c-9c56-c2a2e86ee28b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:04:15.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9737" for this suite.
Jan 29 13:04:21.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:04:21.639: INFO: namespace secrets-9737 deletion completed in 6.113759346s

• [SLOW TEST:16.357 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
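Note: the env-var consumption path exercised here maps a secret key into a container environment variable via secretKeyRef. A minimal equivalent (names illustrative):

    kubectl create secret generic secret-test --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox
        command: ["sh", "-c", "env | grep SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1
    EOF
    kubectl logs secret-env-demo   # SECRET_DATA=value-1
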
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:04:21.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 29 13:04:21.713: INFO: Waiting up to 5m0s for pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0" in namespace "emptydir-720" to be "success or failure"
Jan 29 13:04:21.727: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.529436ms
Jan 29 13:04:23.736: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022625572s
Jan 29 13:04:25.743: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02927514s
Jan 29 13:04:27.750: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036564647s
Jan 29 13:04:29.758: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044426925s
Jan 29 13:04:31.884: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170155848s
STEP: Saw pod success
Jan 29 13:04:31.884: INFO: Pod "pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0" satisfied condition "success or failure"
Jan 29 13:04:31.888: INFO: Trying to get logs from node iruya-node pod pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0 container test-container: 
STEP: delete the pod
Jan 29 13:04:31.980: INFO: Waiting for pod pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0 to disappear
Jan 29 13:04:32.126: INFO: Pod pod-f397915b-4a5d-4e2c-85e2-5b53e7c732d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:04:32.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-720" for this suite.
Jan 29 13:04:38.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:04:38.332: INFO: namespace emptydir-720 deletion completed in 6.190367816s

• [SLOW TEST:16.693 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
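Note: same pattern as the (non-root,0777,tmpfs) case earlier, with two knobs flipped: the container runs as root and the mode under test is 0644. Conceptually the in-container check reduces to (illustrative; the real test asserts on output from its mounttest binary):

    touch /mnt/test/f && chmod 0644 /mnt/test/f
    stat -c '%a' /mnt/test/f    # expect: 644
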
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:04:38.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8412/configmap-test-0c911c47-4d07-446f-80f4-d7b7f0f1ee11
STEP: Creating a pod to test consume configMaps
Jan 29 13:04:38.633: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d" in namespace "configmap-8412" to be "success or failure"
Jan 29 13:04:38.648: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.555067ms
Jan 29 13:04:40.656: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023124273s
Jan 29 13:04:42.675: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042130876s
Jan 29 13:04:44.689: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055730695s
Jan 29 13:04:46.710: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077215182s
Jan 29 13:04:48.716: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083354735s
STEP: Saw pod success
Jan 29 13:04:48.717: INFO: Pod "pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d" satisfied condition "success or failure"
Jan 29 13:04:48.720: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d container env-test: 
STEP: delete the pod
Jan 29 13:04:48.783: INFO: Waiting for pod pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d to disappear
Jan 29 13:04:48.885: INFO: Pod pod-configmaps-f6bb02ef-f0d0-4d4c-b950-7c296b30767d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:04:48.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8412" for this suite.
Jan 29 13:04:54.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:04:55.020: INFO: namespace configmap-8412 deletion completed in 6.125698213s

• [SLOW TEST:16.688 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
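Note: this env-var test and the projected-configMap volume test that follows exercise the two consumption paths for the same kind of object. One pod can demonstrate both at once (all names here are illustrative):

    kubectl create configmap configmap-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-demo-pod
    spec:
      restartPolicy: Never
      containers:
      - name: env-and-volume-test
        image: busybox
        command: ["sh", "-c", "echo env=$CONFIG_DATA_1 && echo file=$(cat /etc/config/data-1)"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-demo
              key: data-1
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: configmap-demo
    EOF
    kubectl logs configmap-demo-pod   # env=value-1, then file=value-1
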
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:04:55.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a0a77192-998a-4c0f-99d6-00df14f357b1
STEP: Creating a pod to test consume configMaps
Jan 29 13:04:55.179: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5" in namespace "projected-7364" to be "success or failure"
Jan 29 13:04:55.183: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.691306ms
Jan 29 13:04:57.192: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013544557s
Jan 29 13:04:59.200: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020970775s
Jan 29 13:05:01.208: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029895404s
Jan 29 13:05:03.228: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049489102s
Jan 29 13:05:05.234: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055825358s
STEP: Saw pod success
Jan 29 13:05:05.235: INFO: Pod "pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5" satisfied condition "success or failure"
Jan 29 13:05:05.237: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 13:05:05.375: INFO: Waiting for pod pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5 to disappear
Jan 29 13:05:05.412: INFO: Pod pod-projected-configmaps-a9dcf251-72de-48bb-b297-9ec14bbfada5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:05:05.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7364" for this suite.
Jan 29 13:05:11.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:05:11.607: INFO: namespace projected-7364 deletion completed in 6.191448478s

• [SLOW TEST:16.586 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:05:11.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8049
I0129 13:05:11.904164 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8049, replica count: 1
I0129 13:05:12.955468 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:13.955835 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:14.956435 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:15.957099 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:16.958746 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:17.959579 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:18.960374 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:19.961484 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0129 13:05:20.962080 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 29 13:05:21.131: INFO: Created: latency-svc-7vml4 Jan 29 13:05:21.164: INFO: Got endpoints: latency-svc-7vml4 [101.525528ms] Jan 29 13:05:21.283: INFO: Created: latency-svc-q46nt Jan 29 13:05:21.326: INFO: Created: latency-svc-zmnfn Jan 29 13:05:21.326: INFO: Got endpoints: latency-svc-q46nt [160.877833ms] Jan 29 13:05:21.463: INFO: Got endpoints:
latency-svc-zmnfn [298.414453ms] Jan 29 13:05:21.504: INFO: Created: latency-svc-4l8zp Jan 29 13:05:21.535: INFO: Got endpoints: latency-svc-4l8zp [369.412113ms] Jan 29 13:05:21.711: INFO: Created: latency-svc-tp5xs Jan 29 13:05:21.729: INFO: Got endpoints: latency-svc-tp5xs [562.904277ms] Jan 29 13:05:21.933: INFO: Created: latency-svc-lczb4 Jan 29 13:05:21.991: INFO: Got endpoints: latency-svc-lczb4 [825.126326ms] Jan 29 13:05:21.993: INFO: Created: latency-svc-6ftzx Jan 29 13:05:22.017: INFO: Got endpoints: latency-svc-6ftzx [850.868399ms] Jan 29 13:05:22.222: INFO: Created: latency-svc-g5lj2 Jan 29 13:05:22.243: INFO: Got endpoints: latency-svc-g5lj2 [1.077416649s] Jan 29 13:05:22.484: INFO: Created: latency-svc-7lgxx Jan 29 13:05:22.498: INFO: Got endpoints: latency-svc-7lgxx [1.331399074s] Jan 29 13:05:22.682: INFO: Created: latency-svc-994wz Jan 29 13:05:22.688: INFO: Got endpoints: latency-svc-994wz [1.521274785s] Jan 29 13:05:22.775: INFO: Created: latency-svc-bf47j Jan 29 13:05:22.901: INFO: Got endpoints: latency-svc-bf47j [1.734796245s] Jan 29 13:05:23.087: INFO: Created: latency-svc-6m5x8 Jan 29 13:05:23.090: INFO: Got endpoints: latency-svc-6m5x8 [1.923064642s] Jan 29 13:05:23.177: INFO: Created: latency-svc-l6zlh Jan 29 13:05:23.327: INFO: Got endpoints: latency-svc-l6zlh [2.159977772s] Jan 29 13:05:23.344: INFO: Created: latency-svc-5rrhl Jan 29 13:05:23.380: INFO: Got endpoints: latency-svc-5rrhl [2.213711602s] Jan 29 13:05:23.593: INFO: Created: latency-svc-qxq2n Jan 29 13:05:23.616: INFO: Got endpoints: latency-svc-qxq2n [2.449419645s] Jan 29 13:05:23.687: INFO: Created: latency-svc-nzjp2 Jan 29 13:05:23.820: INFO: Got endpoints: latency-svc-nzjp2 [2.65404745s] Jan 29 13:05:23.837: INFO: Created: latency-svc-lzr9g Jan 29 13:05:23.852: INFO: Got endpoints: latency-svc-lzr9g [2.525492896s] Jan 29 13:05:24.059: INFO: Created: latency-svc-xx64h Jan 29 13:05:24.073: INFO: Got endpoints: latency-svc-xx64h [2.609260255s] Jan 29 13:05:24.124: INFO: Created: latency-svc-s6l2k Jan 29 13:05:24.135: INFO: Got endpoints: latency-svc-s6l2k [2.599252822s] Jan 29 13:05:24.238: INFO: Created: latency-svc-6ljxv Jan 29 13:05:24.250: INFO: Got endpoints: latency-svc-6ljxv [2.51969422s] Jan 29 13:05:24.301: INFO: Created: latency-svc-jggrt Jan 29 13:05:24.313: INFO: Got endpoints: latency-svc-jggrt [2.320794121s] Jan 29 13:05:24.475: INFO: Created: latency-svc-xptv7 Jan 29 13:05:24.508: INFO: Got endpoints: latency-svc-xptv7 [2.491529346s] Jan 29 13:05:24.516: INFO: Created: latency-svc-5clbc Jan 29 13:05:24.546: INFO: Got endpoints: latency-svc-5clbc [2.301893268s] Jan 29 13:05:24.690: INFO: Created: latency-svc-s927q Jan 29 13:05:24.770: INFO: Created: latency-svc-6hdjf Jan 29 13:05:24.770: INFO: Got endpoints: latency-svc-s927q [2.27137621s] Jan 29 13:05:24.786: INFO: Got endpoints: latency-svc-6hdjf [2.097884916s] Jan 29 13:05:24.881: INFO: Created: latency-svc-hxtgg Jan 29 13:05:24.905: INFO: Got endpoints: latency-svc-hxtgg [2.003387188s] Jan 29 13:05:24.953: INFO: Created: latency-svc-8b9jb Jan 29 13:05:24.958: INFO: Got endpoints: latency-svc-8b9jb [1.868361953s] Jan 29 13:05:25.059: INFO: Created: latency-svc-vh6bg Jan 29 13:05:25.089: INFO: Got endpoints: latency-svc-vh6bg [1.761782217s] Jan 29 13:05:25.132: INFO: Created: latency-svc-bgzmf Jan 29 13:05:25.132: INFO: Got endpoints: latency-svc-bgzmf [1.751919082s] Jan 29 13:05:25.273: INFO: Created: latency-svc-nzg2c Jan 29 13:05:25.303: INFO: Got endpoints: latency-svc-nzg2c [1.68684282s] Jan 29 13:05:25.336: INFO: Created: 
latency-svc-s9gkw Jan 29 13:05:25.339: INFO: Got endpoints: latency-svc-s9gkw [1.519263148s] Jan 29 13:05:25.467: INFO: Created: latency-svc-h6np6 Jan 29 13:05:25.494: INFO: Got endpoints: latency-svc-h6np6 [1.641253581s] Jan 29 13:05:25.522: INFO: Created: latency-svc-znp5d Jan 29 13:05:25.530: INFO: Got endpoints: latency-svc-znp5d [1.456739226s] Jan 29 13:05:25.661: INFO: Created: latency-svc-5xtvn Jan 29 13:05:25.700: INFO: Got endpoints: latency-svc-5xtvn [1.565154403s] Jan 29 13:05:25.732: INFO: Created: latency-svc-x5hvt Jan 29 13:05:25.746: INFO: Got endpoints: latency-svc-x5hvt [1.49631808s] Jan 29 13:05:25.886: INFO: Created: latency-svc-6j2hc Jan 29 13:05:25.930: INFO: Got endpoints: latency-svc-6j2hc [1.616835304s] Jan 29 13:05:26.053: INFO: Created: latency-svc-c4s7q Jan 29 13:05:26.068: INFO: Got endpoints: latency-svc-c4s7q [1.559094668s] Jan 29 13:05:26.125: INFO: Created: latency-svc-sjlsl Jan 29 13:05:26.131: INFO: Got endpoints: latency-svc-sjlsl [1.585302608s] Jan 29 13:05:26.303: INFO: Created: latency-svc-swxtp Jan 29 13:05:26.334: INFO: Got endpoints: latency-svc-swxtp [1.563502657s] Jan 29 13:05:26.486: INFO: Created: latency-svc-dxgh9 Jan 29 13:05:26.499: INFO: Got endpoints: latency-svc-dxgh9 [1.713065565s] Jan 29 13:05:26.691: INFO: Created: latency-svc-q7jgb Jan 29 13:05:26.718: INFO: Got endpoints: latency-svc-q7jgb [384.002736ms] Jan 29 13:05:26.756: INFO: Created: latency-svc-b4lzw Jan 29 13:05:26.884: INFO: Got endpoints: latency-svc-b4lzw [1.978717713s] Jan 29 13:05:26.920: INFO: Created: latency-svc-wmtw2 Jan 29 13:05:26.938: INFO: Got endpoints: latency-svc-wmtw2 [1.97965029s] Jan 29 13:05:26.979: INFO: Created: latency-svc-6nvcz Jan 29 13:05:27.092: INFO: Got endpoints: latency-svc-6nvcz [2.002414904s] Jan 29 13:05:27.148: INFO: Created: latency-svc-8dqxp Jan 29 13:05:27.305: INFO: Got endpoints: latency-svc-8dqxp [2.172555978s] Jan 29 13:05:27.378: INFO: Created: latency-svc-8mztz Jan 29 13:05:27.398: INFO: Got endpoints: latency-svc-8mztz [2.094886635s] Jan 29 13:05:27.509: INFO: Created: latency-svc-j95qk Jan 29 13:05:27.516: INFO: Got endpoints: latency-svc-j95qk [2.177002403s] Jan 29 13:05:27.584: INFO: Created: latency-svc-fc2k2 Jan 29 13:05:27.702: INFO: Got endpoints: latency-svc-fc2k2 [2.207901174s] Jan 29 13:05:27.764: INFO: Created: latency-svc-vk7sv Jan 29 13:05:27.764: INFO: Got endpoints: latency-svc-vk7sv [2.233768935s] Jan 29 13:05:27.964: INFO: Created: latency-svc-jvhcv Jan 29 13:05:27.968: INFO: Got endpoints: latency-svc-jvhcv [2.267966897s] Jan 29 13:05:28.882: INFO: Created: latency-svc-84nkg Jan 29 13:05:28.905: INFO: Got endpoints: latency-svc-84nkg [3.158461898s] Jan 29 13:05:29.148: INFO: Created: latency-svc-wf7dm Jan 29 13:05:29.167: INFO: Got endpoints: latency-svc-wf7dm [3.237139902s] Jan 29 13:05:29.229: INFO: Created: latency-svc-tw7c8 Jan 29 13:05:29.367: INFO: Got endpoints: latency-svc-tw7c8 [3.298338313s] Jan 29 13:05:29.456: INFO: Created: latency-svc-wnt2q Jan 29 13:05:29.588: INFO: Got endpoints: latency-svc-wnt2q [3.456523032s] Jan 29 13:05:29.653: INFO: Created: latency-svc-sqk8z Jan 29 13:05:29.662: INFO: Got endpoints: latency-svc-sqk8z [3.162096154s] Jan 29 13:05:29.835: INFO: Created: latency-svc-s5m8p Jan 29 13:05:29.868: INFO: Got endpoints: latency-svc-s5m8p [3.14909918s] Jan 29 13:05:30.010: INFO: Created: latency-svc-q2s4t Jan 29 13:05:30.069: INFO: Created: latency-svc-lk4h7 Jan 29 13:05:30.069: INFO: Got endpoints: latency-svc-q2s4t [3.184889113s] Jan 29 13:05:30.081: INFO: Got endpoints: 
latency-svc-lk4h7 [3.142589754s] Jan 29 13:05:30.219: INFO: Created: latency-svc-njf6n Jan 29 13:05:30.223: INFO: Got endpoints: latency-svc-njf6n [3.130606659s] Jan 29 13:05:30.275: INFO: Created: latency-svc-cshgr Jan 29 13:05:30.344: INFO: Got endpoints: latency-svc-cshgr [3.039257906s] Jan 29 13:05:30.373: INFO: Created: latency-svc-x9lrb Jan 29 13:05:30.403: INFO: Got endpoints: latency-svc-x9lrb [3.004759448s] Jan 29 13:05:30.534: INFO: Created: latency-svc-zd2m5 Jan 29 13:05:30.549: INFO: Got endpoints: latency-svc-zd2m5 [3.0328608s] Jan 29 13:05:30.594: INFO: Created: latency-svc-27zzg Jan 29 13:05:30.619: INFO: Got endpoints: latency-svc-27zzg [2.915964998s] Jan 29 13:05:30.780: INFO: Created: latency-svc-b4d54 Jan 29 13:05:30.816: INFO: Got endpoints: latency-svc-b4d54 [3.051708204s] Jan 29 13:05:30.925: INFO: Created: latency-svc-rs4vq Jan 29 13:05:30.931: INFO: Got endpoints: latency-svc-rs4vq [2.962364697s] Jan 29 13:05:30.971: INFO: Created: latency-svc-bktxh Jan 29 13:05:30.992: INFO: Got endpoints: latency-svc-bktxh [2.086487753s] Jan 29 13:05:31.076: INFO: Created: latency-svc-jp6tj Jan 29 13:05:31.119: INFO: Got endpoints: latency-svc-jp6tj [1.951895673s] Jan 29 13:05:31.131: INFO: Created: latency-svc-xcw5s Jan 29 13:05:31.146: INFO: Got endpoints: latency-svc-xcw5s [1.778959927s] Jan 29 13:05:31.248: INFO: Created: latency-svc-rmn4c Jan 29 13:05:31.278: INFO: Got endpoints: latency-svc-rmn4c [1.689004164s] Jan 29 13:05:31.322: INFO: Created: latency-svc-5rz5k Jan 29 13:05:31.327: INFO: Got endpoints: latency-svc-5rz5k [1.664867377s] Jan 29 13:05:31.459: INFO: Created: latency-svc-qmmr8 Jan 29 13:05:31.466: INFO: Got endpoints: latency-svc-qmmr8 [1.598315737s] Jan 29 13:05:31.497: INFO: Created: latency-svc-5t9fm Jan 29 13:05:31.506: INFO: Got endpoints: latency-svc-5t9fm [1.436179914s] Jan 29 13:05:31.538: INFO: Created: latency-svc-wvcml Jan 29 13:05:31.624: INFO: Got endpoints: latency-svc-wvcml [1.542982262s] Jan 29 13:05:31.649: INFO: Created: latency-svc-chdsj Jan 29 13:05:31.667: INFO: Got endpoints: latency-svc-chdsj [1.444044418s] Jan 29 13:05:31.702: INFO: Created: latency-svc-kjp5p Jan 29 13:05:31.717: INFO: Got endpoints: latency-svc-kjp5p [1.372434513s] Jan 29 13:05:31.874: INFO: Created: latency-svc-s75zv Jan 29 13:05:31.885: INFO: Got endpoints: latency-svc-s75zv [1.481268275s] Jan 29 13:05:31.927: INFO: Created: latency-svc-7425g Jan 29 13:05:31.931: INFO: Got endpoints: latency-svc-7425g [1.381483621s] Jan 29 13:05:32.139: INFO: Created: latency-svc-b6j2h Jan 29 13:05:32.175: INFO: Got endpoints: latency-svc-b6j2h [1.555073166s] Jan 29 13:05:32.300: INFO: Created: latency-svc-2zhrr Jan 29 13:05:32.331: INFO: Got endpoints: latency-svc-2zhrr [1.51409339s] Jan 29 13:05:32.364: INFO: Created: latency-svc-mrhjt Jan 29 13:05:32.390: INFO: Got endpoints: latency-svc-mrhjt [1.458810855s] Jan 29 13:05:32.505: INFO: Created: latency-svc-l5q26 Jan 29 13:05:32.518: INFO: Got endpoints: latency-svc-l5q26 [1.525589604s] Jan 29 13:05:32.589: INFO: Created: latency-svc-ltwcz Jan 29 13:05:32.726: INFO: Got endpoints: latency-svc-ltwcz [1.606082294s] Jan 29 13:05:32.813: INFO: Created: latency-svc-m7f2r Jan 29 13:05:32.966: INFO: Got endpoints: latency-svc-m7f2r [1.819727546s] Jan 29 13:05:32.991: INFO: Created: latency-svc-n2x5j Jan 29 13:05:33.002: INFO: Got endpoints: latency-svc-n2x5j [1.724288891s] Jan 29 13:05:33.046: INFO: Created: latency-svc-dxcl8 Jan 29 13:05:33.055: INFO: Got endpoints: latency-svc-dxcl8 [1.727722987s] Jan 29 13:05:33.233: INFO: Created: 
latency-svc-trscj Jan 29 13:05:33.241: INFO: Got endpoints: latency-svc-trscj [1.774292369s] Jan 29 13:05:33.303: INFO: Created: latency-svc-t9njh Jan 29 13:05:33.323: INFO: Got endpoints: latency-svc-t9njh [1.816742957s] Jan 29 13:05:33.507: INFO: Created: latency-svc-2b6p9 Jan 29 13:05:33.645: INFO: Got endpoints: latency-svc-2b6p9 [2.01987111s] Jan 29 13:05:33.665: INFO: Created: latency-svc-tsk6j Jan 29 13:05:33.683: INFO: Got endpoints: latency-svc-tsk6j [2.015962247s] Jan 29 13:05:33.892: INFO: Created: latency-svc-vdwn8 Jan 29 13:05:33.905: INFO: Got endpoints: latency-svc-vdwn8 [2.187159899s] Jan 29 13:05:34.243: INFO: Created: latency-svc-j29n9 Jan 29 13:05:34.272: INFO: Got endpoints: latency-svc-j29n9 [2.386620265s] Jan 29 13:05:34.460: INFO: Created: latency-svc-h6pvs Jan 29 13:05:34.478: INFO: Got endpoints: latency-svc-h6pvs [2.546753408s] Jan 29 13:05:34.548: INFO: Created: latency-svc-tb8bk Jan 29 13:05:34.802: INFO: Got endpoints: latency-svc-tb8bk [2.626663851s] Jan 29 13:05:34.814: INFO: Created: latency-svc-7f2t5 Jan 29 13:05:34.840: INFO: Got endpoints: latency-svc-7f2t5 [2.50939325s] Jan 29 13:05:35.031: INFO: Created: latency-svc-4w9n5 Jan 29 13:05:35.056: INFO: Got endpoints: latency-svc-4w9n5 [2.665726229s] Jan 29 13:05:35.102: INFO: Created: latency-svc-895zb Jan 29 13:05:35.126: INFO: Got endpoints: latency-svc-895zb [2.607039706s] Jan 29 13:05:35.293: INFO: Created: latency-svc-s9tvp Jan 29 13:05:35.302: INFO: Got endpoints: latency-svc-s9tvp [2.576171014s] Jan 29 13:05:35.348: INFO: Created: latency-svc-ntw6n Jan 29 13:05:35.354: INFO: Got endpoints: latency-svc-ntw6n [2.387507488s] Jan 29 13:05:35.569: INFO: Created: latency-svc-j4bqv Jan 29 13:05:35.584: INFO: Got endpoints: latency-svc-j4bqv [2.581805984s] Jan 29 13:05:35.658: INFO: Created: latency-svc-lm2t5 Jan 29 13:05:35.659: INFO: Got endpoints: latency-svc-lm2t5 [2.603940515s] Jan 29 13:05:35.833: INFO: Created: latency-svc-wddw2 Jan 29 13:05:35.835: INFO: Got endpoints: latency-svc-wddw2 [2.593973303s] Jan 29 13:05:35.897: INFO: Created: latency-svc-7mxdz Jan 29 13:05:36.088: INFO: Got endpoints: latency-svc-7mxdz [2.764966131s] Jan 29 13:05:36.115: INFO: Created: latency-svc-l4dc7 Jan 29 13:05:36.124: INFO: Got endpoints: latency-svc-l4dc7 [2.479545953s] Jan 29 13:05:36.167: INFO: Created: latency-svc-2ldtv Jan 29 13:05:36.379: INFO: Got endpoints: latency-svc-2ldtv [2.695830833s] Jan 29 13:05:36.403: INFO: Created: latency-svc-v97fc Jan 29 13:05:36.405: INFO: Got endpoints: latency-svc-v97fc [2.500261273s] Jan 29 13:05:36.492: INFO: Created: latency-svc-b58s4 Jan 29 13:05:36.645: INFO: Got endpoints: latency-svc-b58s4 [2.373083458s] Jan 29 13:05:36.660: INFO: Created: latency-svc-cln69 Jan 29 13:05:36.675: INFO: Got endpoints: latency-svc-cln69 [2.196016675s] Jan 29 13:05:36.724: INFO: Created: latency-svc-sn6fw Jan 29 13:05:36.914: INFO: Created: latency-svc-wknh7 Jan 29 13:05:36.915: INFO: Got endpoints: latency-svc-sn6fw [2.111874337s] Jan 29 13:05:36.958: INFO: Got endpoints: latency-svc-wknh7 [2.117192404s] Jan 29 13:05:36.999: INFO: Created: latency-svc-4m4gc Jan 29 13:05:37.126: INFO: Got endpoints: latency-svc-4m4gc [2.069105362s] Jan 29 13:05:37.153: INFO: Created: latency-svc-lptq5 Jan 29 13:05:37.153: INFO: Got endpoints: latency-svc-lptq5 [2.027356128s] Jan 29 13:05:37.196: INFO: Created: latency-svc-hj57j Jan 29 13:05:37.206: INFO: Got endpoints: latency-svc-hj57j [1.903118589s] Jan 29 13:05:37.342: INFO: Created: latency-svc-rqbjx Jan 29 13:05:37.692: INFO: Got endpoints: 
latency-svc-rqbjx [2.338119102s] Jan 29 13:05:37.697: INFO: Created: latency-svc-mn2sg Jan 29 13:05:37.713: INFO: Got endpoints: latency-svc-mn2sg [2.127672586s] Jan 29 13:05:37.790: INFO: Created: latency-svc-dfxbq Jan 29 13:05:37.959: INFO: Got endpoints: latency-svc-dfxbq [2.299715669s] Jan 29 13:05:37.968: INFO: Created: latency-svc-pmh69 Jan 29 13:05:37.974: INFO: Got endpoints: latency-svc-pmh69 [2.139189249s] Jan 29 13:05:38.197: INFO: Created: latency-svc-5h5xt Jan 29 13:05:38.216: INFO: Got endpoints: latency-svc-5h5xt [2.12674636s] Jan 29 13:05:38.258: INFO: Created: latency-svc-5dc25 Jan 29 13:05:38.274: INFO: Got endpoints: latency-svc-5dc25 [2.149739367s] Jan 29 13:05:38.406: INFO: Created: latency-svc-gjxhz Jan 29 13:05:38.465: INFO: Created: latency-svc-kkb6j Jan 29 13:05:38.465: INFO: Got endpoints: latency-svc-gjxhz [2.085528356s] Jan 29 13:05:38.550: INFO: Got endpoints: latency-svc-kkb6j [2.144437658s] Jan 29 13:05:38.595: INFO: Created: latency-svc-95zjb Jan 29 13:05:38.632: INFO: Got endpoints: latency-svc-95zjb [1.986599884s] Jan 29 13:05:38.791: INFO: Created: latency-svc-jz62r Jan 29 13:05:38.801: INFO: Got endpoints: latency-svc-jz62r [2.125948436s] Jan 29 13:05:38.876: INFO: Created: latency-svc-nxvnl Jan 29 13:05:39.010: INFO: Got endpoints: latency-svc-nxvnl [2.094600347s] Jan 29 13:05:39.023: INFO: Created: latency-svc-xpmqw Jan 29 13:05:39.031: INFO: Got endpoints: latency-svc-xpmqw [2.072818862s] Jan 29 13:05:39.171: INFO: Created: latency-svc-hfqkt Jan 29 13:05:39.184: INFO: Got endpoints: latency-svc-hfqkt [2.057974231s] Jan 29 13:05:39.251: INFO: Created: latency-svc-qgdmg Jan 29 13:05:39.267: INFO: Got endpoints: latency-svc-qgdmg [2.113814452s] Jan 29 13:05:39.427: INFO: Created: latency-svc-sbp5z Jan 29 13:05:39.433: INFO: Got endpoints: latency-svc-sbp5z [2.226575626s] Jan 29 13:05:39.479: INFO: Created: latency-svc-2dqfv Jan 29 13:05:39.497: INFO: Got endpoints: latency-svc-2dqfv [1.804382105s] Jan 29 13:05:39.621: INFO: Created: latency-svc-qpjgb Jan 29 13:05:39.629: INFO: Got endpoints: latency-svc-qpjgb [1.916353165s] Jan 29 13:05:39.671: INFO: Created: latency-svc-lp69b Jan 29 13:05:39.690: INFO: Got endpoints: latency-svc-lp69b [1.730803313s] Jan 29 13:05:39.860: INFO: Created: latency-svc-sfrb2 Jan 29 13:05:40.141: INFO: Got endpoints: latency-svc-sfrb2 [2.166701674s] Jan 29 13:05:40.232: INFO: Created: latency-svc-nt8hx Jan 29 13:05:40.339: INFO: Got endpoints: latency-svc-nt8hx [2.123321099s] Jan 29 13:05:40.371: INFO: Created: latency-svc-ffdx7 Jan 29 13:05:40.383: INFO: Got endpoints: latency-svc-ffdx7 [2.108731147s] Jan 29 13:05:40.419: INFO: Created: latency-svc-k8j8l Jan 29 13:05:40.537: INFO: Got endpoints: latency-svc-k8j8l [2.07185017s] Jan 29 13:05:40.547: INFO: Created: latency-svc-d5nnl Jan 29 13:05:40.548: INFO: Got endpoints: latency-svc-d5nnl [1.998068929s] Jan 29 13:05:40.607: INFO: Created: latency-svc-7jl2p Jan 29 13:05:40.613: INFO: Got endpoints: latency-svc-7jl2p [1.979955855s] Jan 29 13:05:40.740: INFO: Created: latency-svc-g2grw Jan 29 13:05:40.743: INFO: Got endpoints: latency-svc-g2grw [1.941421734s] Jan 29 13:05:40.810: INFO: Created: latency-svc-7fl8t Jan 29 13:05:40.810: INFO: Got endpoints: latency-svc-7fl8t [1.799537241s] Jan 29 13:05:40.900: INFO: Created: latency-svc-h4xx7 Jan 29 13:05:40.907: INFO: Got endpoints: latency-svc-h4xx7 [1.875741586s] Jan 29 13:05:40.947: INFO: Created: latency-svc-fl8hq Jan 29 13:05:40.952: INFO: Got endpoints: latency-svc-fl8hq [1.768233698s] Jan 29 13:05:40.994: INFO: Created: 
latency-svc-l8zdz Jan 29 13:05:41.070: INFO: Got endpoints: latency-svc-l8zdz [1.802891384s] Jan 29 13:05:41.095: INFO: Created: latency-svc-zl5zr Jan 29 13:05:41.114: INFO: Got endpoints: latency-svc-zl5zr [1.680745634s] Jan 29 13:05:41.152: INFO: Created: latency-svc-tjlnd Jan 29 13:05:41.321: INFO: Got endpoints: latency-svc-tjlnd [1.824186606s] Jan 29 13:05:41.333: INFO: Created: latency-svc-4rrz8 Jan 29 13:05:41.335: INFO: Got endpoints: latency-svc-4rrz8 [1.706303938s] Jan 29 13:05:41.399: INFO: Created: latency-svc-m2m8m Jan 29 13:05:41.413: INFO: Got endpoints: latency-svc-m2m8m [1.722931138s] Jan 29 13:05:41.522: INFO: Created: latency-svc-kcz8q Jan 29 13:05:41.529: INFO: Got endpoints: latency-svc-kcz8q [1.387121085s] Jan 29 13:05:41.572: INFO: Created: latency-svc-q28kw Jan 29 13:05:41.575: INFO: Got endpoints: latency-svc-q28kw [1.234964181s] Jan 29 13:05:41.683: INFO: Created: latency-svc-sbfq9 Jan 29 13:05:41.692: INFO: Got endpoints: latency-svc-sbfq9 [1.308686029s] Jan 29 13:05:41.727: INFO: Created: latency-svc-66ggl Jan 29 13:05:41.732: INFO: Got endpoints: latency-svc-66ggl [1.193725012s] Jan 29 13:05:41.769: INFO: Created: latency-svc-g8d94 Jan 29 13:05:41.850: INFO: Got endpoints: latency-svc-g8d94 [1.302038904s] Jan 29 13:05:41.874: INFO: Created: latency-svc-4tpnd Jan 29 13:05:41.930: INFO: Got endpoints: latency-svc-4tpnd [1.317245095s] Jan 29 13:05:41.950: INFO: Created: latency-svc-wxzmk Jan 29 13:05:42.007: INFO: Got endpoints: latency-svc-wxzmk [1.263935853s] Jan 29 13:05:42.070: INFO: Created: latency-svc-t77ks Jan 29 13:05:42.070: INFO: Got endpoints: latency-svc-t77ks [1.260297082s] Jan 29 13:05:42.107: INFO: Created: latency-svc-64h42 Jan 29 13:05:42.219: INFO: Got endpoints: latency-svc-64h42 [1.311562964s] Jan 29 13:05:42.241: INFO: Created: latency-svc-2gx2h Jan 29 13:05:42.246: INFO: Got endpoints: latency-svc-2gx2h [1.293867343s] Jan 29 13:05:42.299: INFO: Created: latency-svc-44fz5 Jan 29 13:05:42.396: INFO: Got endpoints: latency-svc-44fz5 [1.324964143s] Jan 29 13:05:42.411: INFO: Created: latency-svc-9gpmg Jan 29 13:05:42.411: INFO: Got endpoints: latency-svc-9gpmg [1.296836176s] Jan 29 13:05:42.643: INFO: Created: latency-svc-hxz6l Jan 29 13:05:42.683: INFO: Got endpoints: latency-svc-hxz6l [1.361034031s] Jan 29 13:05:42.690: INFO: Created: latency-svc-rg767 Jan 29 13:05:42.692: INFO: Got endpoints: latency-svc-rg767 [1.356271086s] Jan 29 13:05:42.733: INFO: Created: latency-svc-89zql Jan 29 13:05:42.809: INFO: Got endpoints: latency-svc-89zql [1.396415608s] Jan 29 13:05:42.861: INFO: Created: latency-svc-4ml4w Jan 29 13:05:42.864: INFO: Got endpoints: latency-svc-4ml4w [1.335254772s] Jan 29 13:05:42.989: INFO: Created: latency-svc-jktzr Jan 29 13:05:43.020: INFO: Got endpoints: latency-svc-jktzr [1.445659002s] Jan 29 13:05:43.028: INFO: Created: latency-svc-qf6rn Jan 29 13:05:43.030: INFO: Got endpoints: latency-svc-qf6rn [1.337835528s] Jan 29 13:05:43.068: INFO: Created: latency-svc-mc9wp Jan 29 13:05:43.171: INFO: Got endpoints: latency-svc-mc9wp [1.438573981s] Jan 29 13:05:43.181: INFO: Created: latency-svc-24bfl Jan 29 13:05:43.186: INFO: Got endpoints: latency-svc-24bfl [1.335316078s] Jan 29 13:05:43.240: INFO: Created: latency-svc-42vzs Jan 29 13:05:43.327: INFO: Got endpoints: latency-svc-42vzs [1.39631384s] Jan 29 13:05:43.329: INFO: Created: latency-svc-smhgr Jan 29 13:05:43.339: INFO: Got endpoints: latency-svc-smhgr [1.33201728s] Jan 29 13:05:43.416: INFO: Created: latency-svc-mgdgn Jan 29 13:05:43.516: INFO: Got endpoints: 
latency-svc-mgdgn [1.445704204s] Jan 29 13:05:43.520: INFO: Created: latency-svc-94hs2 Jan 29 13:05:43.530: INFO: Got endpoints: latency-svc-94hs2 [1.309733567s] Jan 29 13:05:43.597: INFO: Created: latency-svc-k7s28 Jan 29 13:05:43.715: INFO: Got endpoints: latency-svc-k7s28 [1.468148348s] Jan 29 13:05:43.731: INFO: Created: latency-svc-gbc69 Jan 29 13:05:43.741: INFO: Got endpoints: latency-svc-gbc69 [1.344566527s] Jan 29 13:05:43.792: INFO: Created: latency-svc-bjfwv Jan 29 13:05:43.805: INFO: Got endpoints: latency-svc-bjfwv [1.393738464s] Jan 29 13:05:43.910: INFO: Created: latency-svc-fvmw2 Jan 29 13:05:43.914: INFO: Got endpoints: latency-svc-fvmw2 [1.230410126s] Jan 29 13:05:43.976: INFO: Created: latency-svc-pvxlm Jan 29 13:05:43.986: INFO: Got endpoints: latency-svc-pvxlm [1.294128024s] Jan 29 13:05:44.098: INFO: Created: latency-svc-th8m6 Jan 29 13:05:44.101: INFO: Got endpoints: latency-svc-th8m6 [1.29055966s] Jan 29 13:05:44.180: INFO: Created: latency-svc-bzwcz Jan 29 13:05:44.231: INFO: Got endpoints: latency-svc-bzwcz [1.366698148s] Jan 29 13:05:44.276: INFO: Created: latency-svc-nsht6 Jan 29 13:05:44.278: INFO: Got endpoints: latency-svc-nsht6 [1.257484876s] Jan 29 13:05:44.326: INFO: Created: latency-svc-rtds6 Jan 29 13:05:44.406: INFO: Got endpoints: latency-svc-rtds6 [1.375251291s] Jan 29 13:05:44.431: INFO: Created: latency-svc-2n4hf Jan 29 13:05:44.433: INFO: Got endpoints: latency-svc-2n4hf [1.261485186s] Jan 29 13:05:44.467: INFO: Created: latency-svc-j26wr Jan 29 13:05:44.472: INFO: Got endpoints: latency-svc-j26wr [1.285893393s] Jan 29 13:05:44.582: INFO: Created: latency-svc-n657m Jan 29 13:05:44.626: INFO: Got endpoints: latency-svc-n657m [1.299073106s] Jan 29 13:05:44.636: INFO: Created: latency-svc-75hqb Jan 29 13:05:44.664: INFO: Got endpoints: latency-svc-75hqb [1.323894967s] Jan 29 13:05:44.665: INFO: Created: latency-svc-mhvpx Jan 29 13:05:44.671: INFO: Got endpoints: latency-svc-mhvpx [1.154962539s] Jan 29 13:05:44.779: INFO: Created: latency-svc-vwtnr Jan 29 13:05:44.790: INFO: Got endpoints: latency-svc-vwtnr [1.260690668s] Jan 29 13:05:44.837: INFO: Created: latency-svc-dt5nm Jan 29 13:05:44.847: INFO: Got endpoints: latency-svc-dt5nm [1.131688226s] Jan 29 13:05:44.999: INFO: Created: latency-svc-wksnk Jan 29 13:05:45.011: INFO: Got endpoints: latency-svc-wksnk [1.270173926s] Jan 29 13:05:45.079: INFO: Created: latency-svc-whbzc Jan 29 13:05:45.217: INFO: Got endpoints: latency-svc-whbzc [1.41266176s] Jan 29 13:05:45.261: INFO: Created: latency-svc-tr47x Jan 29 13:05:45.277: INFO: Got endpoints: latency-svc-tr47x [1.363717751s] Jan 29 13:05:45.429: INFO: Created: latency-svc-hxbfc Jan 29 13:05:45.441: INFO: Got endpoints: latency-svc-hxbfc [1.454784341s] Jan 29 13:05:45.476: INFO: Created: latency-svc-fq5nt Jan 29 13:05:45.485: INFO: Got endpoints: latency-svc-fq5nt [1.384535743s] Jan 29 13:05:45.529: INFO: Created: latency-svc-rvlcs Jan 29 13:05:45.655: INFO: Got endpoints: latency-svc-rvlcs [1.423504916s] Jan 29 13:05:45.681: INFO: Created: latency-svc-ppzp6 Jan 29 13:05:45.705: INFO: Got endpoints: latency-svc-ppzp6 [1.427083974s] Jan 29 13:05:45.732: INFO: Created: latency-svc-cb667 Jan 29 13:05:45.852: INFO: Got endpoints: latency-svc-cb667 [1.446356246s] Jan 29 13:05:45.867: INFO: Created: latency-svc-gfmvt Jan 29 13:05:45.879: INFO: Got endpoints: latency-svc-gfmvt [1.445762136s] Jan 29 13:05:45.933: INFO: Created: latency-svc-lm824 Jan 29 13:05:45.952: INFO: Got endpoints: latency-svc-lm824 [1.479742056s] Jan 29 13:05:46.028: INFO: Created: 
latency-svc-46wwr Jan 29 13:05:46.029: INFO: Got endpoints: latency-svc-46wwr [1.402371648s] Jan 29 13:05:46.065: INFO: Created: latency-svc-fvwls Jan 29 13:05:46.077: INFO: Got endpoints: latency-svc-fvwls [1.412927863s] Jan 29 13:05:46.125: INFO: Created: latency-svc-2lkj6 Jan 29 13:05:46.218: INFO: Got endpoints: latency-svc-2lkj6 [1.546701927s] Jan 29 13:05:46.264: INFO: Created: latency-svc-hkgfh Jan 29 13:05:46.289: INFO: Got endpoints: latency-svc-hkgfh [1.498901476s] Jan 29 13:05:46.294: INFO: Created: latency-svc-d46v8 Jan 29 13:05:46.397: INFO: Got endpoints: latency-svc-d46v8 [1.549826767s] Jan 29 13:05:46.413: INFO: Created: latency-svc-bf5z4 Jan 29 13:05:46.422: INFO: Got endpoints: latency-svc-bf5z4 [1.410897853s] Jan 29 13:05:46.423: INFO: Latencies: [160.877833ms 298.414453ms 369.412113ms 384.002736ms 562.904277ms 825.126326ms 850.868399ms 1.077416649s 1.131688226s 1.154962539s 1.193725012s 1.230410126s 1.234964181s 1.257484876s 1.260297082s 1.260690668s 1.261485186s 1.263935853s 1.270173926s 1.285893393s 1.29055966s 1.293867343s 1.294128024s 1.296836176s 1.299073106s 1.302038904s 1.308686029s 1.309733567s 1.311562964s 1.317245095s 1.323894967s 1.324964143s 1.331399074s 1.33201728s 1.335254772s 1.335316078s 1.337835528s 1.344566527s 1.356271086s 1.361034031s 1.363717751s 1.366698148s 1.372434513s 1.375251291s 1.381483621s 1.384535743s 1.387121085s 1.393738464s 1.39631384s 1.396415608s 1.402371648s 1.410897853s 1.41266176s 1.412927863s 1.423504916s 1.427083974s 1.436179914s 1.438573981s 1.444044418s 1.445659002s 1.445704204s 1.445762136s 1.446356246s 1.454784341s 1.456739226s 1.458810855s 1.468148348s 1.479742056s 1.481268275s 1.49631808s 1.498901476s 1.51409339s 1.519263148s 1.521274785s 1.525589604s 1.542982262s 1.546701927s 1.549826767s 1.555073166s 1.559094668s 1.563502657s 1.565154403s 1.585302608s 1.598315737s 1.606082294s 1.616835304s 1.641253581s 1.664867377s 1.680745634s 1.68684282s 1.689004164s 1.706303938s 1.713065565s 1.722931138s 1.724288891s 1.727722987s 1.730803313s 1.734796245s 1.751919082s 1.761782217s 1.768233698s 1.774292369s 1.778959927s 1.799537241s 1.802891384s 1.804382105s 1.816742957s 1.819727546s 1.824186606s 1.868361953s 1.875741586s 1.903118589s 1.916353165s 1.923064642s 1.941421734s 1.951895673s 1.978717713s 1.97965029s 1.979955855s 1.986599884s 1.998068929s 2.002414904s 2.003387188s 2.015962247s 2.01987111s 2.027356128s 2.057974231s 2.069105362s 2.07185017s 2.072818862s 2.085528356s 2.086487753s 2.094600347s 2.094886635s 2.097884916s 2.108731147s 2.111874337s 2.113814452s 2.117192404s 2.123321099s 2.125948436s 2.12674636s 2.127672586s 2.139189249s 2.144437658s 2.149739367s 2.159977772s 2.166701674s 2.172555978s 2.177002403s 2.187159899s 2.196016675s 2.207901174s 2.213711602s 2.226575626s 2.233768935s 2.267966897s 2.27137621s 2.299715669s 2.301893268s 2.320794121s 2.338119102s 2.373083458s 2.386620265s 2.387507488s 2.449419645s 2.479545953s 2.491529346s 2.500261273s 2.50939325s 2.51969422s 2.525492896s 2.546753408s 2.576171014s 2.581805984s 2.593973303s 2.599252822s 2.603940515s 2.607039706s 2.609260255s 2.626663851s 2.65404745s 2.665726229s 2.695830833s 2.764966131s 2.915964998s 2.962364697s 3.004759448s 3.0328608s 3.039257906s 3.051708204s 3.130606659s 3.142589754s 3.14909918s 3.158461898s 3.162096154s 3.184889113s 3.237139902s 3.298338313s 3.456523032s] Jan 29 13:05:46.423: INFO: 50 %ile: 1.768233698s Jan 29 13:05:46.423: INFO: 90 %ile: 2.626663851s Jan 29 13:05:46.423: INFO: 99 %ile: 3.298338313s Jan 29 13:05:46.423: INFO: Total sample count: 
200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:05:46.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8049" for this suite. Jan 29 13:06:28.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:06:28.579: INFO: namespace svc-latency-8049 deletion completed in 42.141828471s • [SLOW TEST:76.971 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:06:28.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-895b STEP: Creating a pod to test atomic-volume-subpath Jan 29 13:06:28.701: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-895b" in namespace "subpath-6080" to be "success or failure" Jan 29 13:06:28.715: INFO: Pod "pod-subpath-test-secret-895b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.593082ms Jan 29 13:06:30.776: INFO: Pod "pod-subpath-test-secret-895b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07472886s Jan 29 13:06:32.793: INFO: Pod "pod-subpath-test-secret-895b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091322376s Jan 29 13:06:34.807: INFO: Pod "pod-subpath-test-secret-895b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105072349s Jan 29 13:06:36.842: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 8.140420853s Jan 29 13:06:38.856: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 10.154179047s Jan 29 13:06:41.348: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 12.646610892s Jan 29 13:06:43.356: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 14.654356878s Jan 29 13:06:45.368: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 16.666466144s Jan 29 13:06:47.378: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 18.676372693s Jan 29 13:06:49.391: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 20.689015754s Jan 29 13:06:51.402: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.700238316s Jan 29 13:06:53.419: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 24.717372775s Jan 29 13:06:55.429: INFO: Pod "pod-subpath-test-secret-895b": Phase="Running", Reason="", readiness=true. Elapsed: 26.72746622s Jan 29 13:06:57.440: INFO: Pod "pod-subpath-test-secret-895b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.738267077s STEP: Saw pod success Jan 29 13:06:57.440: INFO: Pod "pod-subpath-test-secret-895b" satisfied condition "success or failure" Jan 29 13:06:57.444: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-895b container test-container-subpath-secret-895b: STEP: delete the pod Jan 29 13:06:57.583: INFO: Waiting for pod pod-subpath-test-secret-895b to disappear Jan 29 13:06:57.590: INFO: Pod pod-subpath-test-secret-895b no longer exists STEP: Deleting pod pod-subpath-test-secret-895b Jan 29 13:06:57.590: INFO: Deleting pod "pod-subpath-test-secret-895b" in namespace "subpath-6080" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:06:57.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6080" for this suite. Jan 29 13:07:03.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:07:03.963: INFO: namespace subpath-6080 deletion completed in 6.36105313s • [SLOW TEST:35.383 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:07:03.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 29 13:07:04.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1834' Jan 29 13:07:06.517: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 29 13:07:06.517: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 29 13:07:06.535: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 29 13:07:06.549: INFO: scanned /root for discovery docs: Jan 29 13:07:06.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1834' Jan 29 13:07:28.718: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 29 13:07:28.718: INFO: stdout: "Created e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a\nScaling up e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 29 13:07:28.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1834' Jan 29 13:07:28.882: INFO: stderr: "" Jan 29 13:07:28.882: INFO: stdout: "e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a-bhzcj " Jan 29 13:07:28.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a-bhzcj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1834' Jan 29 13:07:29.002: INFO: stderr: "" Jan 29 13:07:29.002: INFO: stdout: "true" Jan 29 13:07:29.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a-bhzcj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1834' Jan 29 13:07:29.099: INFO: stderr: "" Jan 29 13:07:29.099: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 29 13:07:29.099: INFO: e2e-test-nginx-rc-e842f3a24c5e5aef8daf25ee7509310a-bhzcj is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 29 13:07:29.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1834' Jan 29 13:07:29.195: INFO: stderr: "" Jan 29 13:07:29.195: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:07:29.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1834" for this suite. Jan 29 13:07:51.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:07:51.398: INFO: namespace kubectl-1834 deletion completed in 22.19302643s • [SLOW TEST:47.434 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:07:51.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 29 13:07:51.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8826' Jan 29 13:07:52.129: INFO: stderr: "" Jan 29 13:07:52.129: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Jan 29 13:07:53.143: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:53.143: INFO: Found 0 / 1 Jan 29 13:07:54.140: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:54.140: INFO: Found 0 / 1 Jan 29 13:07:55.139: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:55.139: INFO: Found 0 / 1 Jan 29 13:07:56.146: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:56.146: INFO: Found 0 / 1 Jan 29 13:07:57.139: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:57.139: INFO: Found 0 / 1 Jan 29 13:07:58.146: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:58.146: INFO: Found 0 / 1 Jan 29 13:07:59.145: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:07:59.146: INFO: Found 0 / 1 Jan 29 13:08:00.140: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:08:00.140: INFO: Found 1 / 1 Jan 29 13:08:00.140: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 29 13:08:00.205: INFO: Selector matched 1 pods for map[app:redis] Jan 29 13:08:00.206: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for matching strings Jan 29 13:08:00.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr8ll redis-master --namespace=kubectl-8826' Jan 29 13:08:00.491: INFO: stderr: "" Jan 29 13:08:00.492: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 Jan 13:07:58.662 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jan 13:07:58.663 # Server started, Redis version 3.2.12\n1:M 29 Jan 13:07:58.664 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 29 Jan 13:07:58.664 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 29 13:08:00.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr8ll redis-master --namespace=kubectl-8826 --tail=1' Jan 29 13:08:00.719: INFO: stderr: "" Jan 29 13:08:00.719: INFO: stdout: "1:M 29 Jan 13:07:58.664 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 29 13:08:00.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr8ll redis-master --namespace=kubectl-8826 --limit-bytes=1' Jan 29 13:08:00.861: INFO: stderr: "" Jan 29 13:08:00.861: INFO: stdout: " " STEP: exposing timestamps Jan 29 13:08:00.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr8ll redis-master --namespace=kubectl-8826 --tail=1 --timestamps' Jan 29 13:08:01.112: INFO: stderr: "" Jan 29 13:08:01.112: INFO: stdout: "2020-01-29T13:07:58.671846471Z 1:M 29 Jan 13:07:58.664 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 29 13:08:03.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr8ll redis-master --namespace=kubectl-8826 --since=1s' Jan 29 13:08:03.865: INFO: stderr: "" Jan 29 13:08:03.866: INFO: stdout: "" Jan 29 13:08:03.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr8ll redis-master --namespace=kubectl-8826 --since=24h' Jan 29 13:08:04.131: INFO: stderr: "" Jan 29 13:08:04.131: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 Jan 13:07:58.662 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jan 13:07:58.663 # Server started, Redis version 3.2.12\n1:M 29 Jan 13:07:58.664 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jan 13:07:58.664 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 29 13:08:04.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8826' Jan 29 13:08:04.262: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 29 13:08:04.262: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 29 13:08:04.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8826' Jan 29 13:08:04.445: INFO: stderr: "No resources found.\n" Jan 29 13:08:04.445: INFO: stdout: "" Jan 29 13:08:04.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8826 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 29 13:08:04.602: INFO: stderr: "" Jan 29 13:08:04.603: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:08:04.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8826" for this suite. Jan 29 13:08:20.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:08:20.902: INFO: namespace kubectl-8826 deletion completed in 16.273361233s • [SLOW TEST:29.503 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:08:20.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b24f2cbd-0ab8-43c8-9721-3c4af0a6e782 STEP: Creating a pod to test consume secrets Jan 29 13:08:21.055: INFO: Waiting up to 5m0s for pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014" in namespace "secrets-8957" to be "success or failure" Jan 29 13:08:21.072: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014": Phase="Pending", Reason="", readiness=false. Elapsed: 16.370284ms Jan 29 13:08:23.084: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028420395s Jan 29 13:08:25.098: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042681434s Jan 29 13:08:27.104: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.048421048s Jan 29 13:08:29.114: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058749099s Jan 29 13:08:31.120: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064322154s STEP: Saw pod success Jan 29 13:08:31.120: INFO: Pod "pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014" satisfied condition "success or failure" Jan 29 13:08:31.124: INFO: Trying to get logs from node iruya-node pod pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014 container secret-volume-test: STEP: delete the pod Jan 29 13:08:31.240: INFO: Waiting for pod pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014 to disappear Jan 29 13:08:31.267: INFO: Pod pod-secrets-857eca43-8fbe-4b3c-af25-9dde4d560014 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:08:31.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8957" for this suite. Jan 29 13:08:37.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:08:37.401: INFO: namespace secrets-8957 deletion completed in 6.12515418s • [SLOW TEST:16.498 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:08:37.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jan 29 13:08:37.561: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix510557435/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:08:37.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1923" for this suite. 
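The proxy test above starts kubectl proxy with --unix-socket and then fetches /api/ through it. The same check can be reproduced by hand; the socket path below is hypothetical, and this assumes a curl build with --unix-socket support:

    # serve the apiserver over a local Unix socket instead of a TCP port
    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    # fetch the API root through the socket; expect the APIVersions JSON
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
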
Jan 29 13:08:43.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:08:43.917: INFO: namespace kubectl-1923 deletion completed in 6.22894821s • [SLOW TEST:6.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:08:43.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 29 13:08:44.093: INFO: Waiting up to 5m0s for pod "pod-5f821e6b-320b-4909-8207-8874a36dec07" in namespace "emptydir-1697" to be "success or failure" Jan 29 13:08:44.103: INFO: Pod "pod-5f821e6b-320b-4909-8207-8874a36dec07": Phase="Pending", Reason="", readiness=false. Elapsed: 9.72101ms Jan 29 13:08:46.114: INFO: Pod "pod-5f821e6b-320b-4909-8207-8874a36dec07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021335882s Jan 29 13:08:48.131: INFO: Pod "pod-5f821e6b-320b-4909-8207-8874a36dec07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03758849s Jan 29 13:08:50.140: INFO: Pod "pod-5f821e6b-320b-4909-8207-8874a36dec07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046809746s Jan 29 13:08:52.156: INFO: Pod "pod-5f821e6b-320b-4909-8207-8874a36dec07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062990464s STEP: Saw pod success Jan 29 13:08:52.156: INFO: Pod "pod-5f821e6b-320b-4909-8207-8874a36dec07" satisfied condition "success or failure" Jan 29 13:08:52.167: INFO: Trying to get logs from node iruya-node pod pod-5f821e6b-320b-4909-8207-8874a36dec07 container test-container: STEP: delete the pod Jan 29 13:08:52.424: INFO: Waiting for pod pod-5f821e6b-320b-4909-8207-8874a36dec07 to disappear Jan 29 13:08:52.474: INFO: Pod pod-5f821e6b-320b-4909-8207-8874a36dec07 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:08:52.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1697" for this suite. 
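The emptydir case above runs a pod as a non-root user against a tmpfs-backed volume and expects the 0644 file-mode check to succeed. A rough, hypothetical approximation of that pod (the real suite uses a purpose-built mount-test image and generated names), piped to kubectl create -f - just as the suite does:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-tmpfs-demo    # hypothetical; the suite generates its own names
    spec:
      securityContext:
        runAsUser: 1001                 # non-root, matching the (non-root,0644,tmpfs) case
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                # tmpfs-backed emptyDir
      restartPolicy: Never
    EOF
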
Jan 29 13:08:58.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:08:58.730: INFO: namespace emptydir-1697 deletion completed in 6.248843812s • [SLOW TEST:14.812 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:08:58.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3239.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3239.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 253.212.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.212.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.212.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.212.253_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3239.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3239.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3239.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3239.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3239.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 253.212.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.212.253_udp@PTR;check="$$(dig +tcp +noall +answer +search 253.212.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.212.253_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 29 13:09:11.117: INFO: Unable to read wheezy_udp@dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.127: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.133: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.138: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.145: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.149: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.153: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.157: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.162: INFO: Unable to read 10.101.212.253_udp@PTR from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.167: INFO: Unable to read 10.101.212.253_tcp@PTR from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.172: INFO: Unable to read jessie_udp@dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.177: INFO: Unable to read jessie_tcp@dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.183: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the 
server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.194: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.201: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.205: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-3239.svc.cluster.local from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.210: INFO: Unable to read jessie_udp@PodARecord from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.214: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.218: INFO: Unable to read 10.101.212.253_udp@PTR from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.222: INFO: Unable to read 10.101.212.253_tcp@PTR from pod dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686: the server could not find the requested resource (get pods dns-test-8a640e3f-9327-44e1-865f-99b46754d686) Jan 29 13:09:11.222: INFO: Lookups using dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686 failed for: [wheezy_udp@dns-test-service.dns-3239.svc.cluster.local wheezy_tcp@dns-test-service.dns-3239.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-3239.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-3239.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.212.253_udp@PTR 10.101.212.253_tcp@PTR jessie_udp@dns-test-service.dns-3239.svc.cluster.local jessie_tcp@dns-test-service.dns-3239.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3239.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-3239.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-3239.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.212.253_udp@PTR 10.101.212.253_tcp@PTR] Jan 29 13:09:16.417: INFO: DNS probes using dns-3239/dns-test-8a640e3f-9327-44e1-865f-99b46754d686 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:09:16.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3239" for this suite. 
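The probes above boil down to a handful of dig queries run inside the wheezy and jessie client pods; the early "Unable to read" failures are the expected polling until records propagate, after which the lookups succeed. Stripped of the retry loop and result files, the core lookups for this run were (the service names and the 10.101.212.253 ClusterIP are specific to namespace dns-3239):

    # A record for the service, over UDP and then TCP
    dig +notcp +noall +answer +search dns-test-service.dns-3239.svc.cluster.local A
    dig +tcp +noall +answer +search dns-test-service.dns-3239.svc.cluster.local A
    # SRV record for the named http port
    dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3239.svc.cluster.local SRV
    # reverse (PTR) lookup of the service ClusterIP
    dig +notcp +noall +answer +search 253.212.101.10.in-addr.arpa. PTR
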
Jan 29 13:09:22.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:09:22.919: INFO: namespace dns-3239 deletion completed in 6.287790657s • [SLOW TEST:24.188 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:09:22.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 29 13:09:23.069: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"04ed249d-bbf6-450d-89a9-3977a6fd7e24", Controller:(*bool)(0xc000dc51a2), BlockOwnerDeletion:(*bool)(0xc000dc51a3)}} Jan 29 13:09:23.087: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"207a9ea3-c7e2-477e-874e-ce02df2d856b", Controller:(*bool)(0xc00286ff6a), BlockOwnerDeletion:(*bool)(0xc00286ff6b)}} Jan 29 13:09:23.174: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e9efa9df-24b4-4cbe-a0da-5dee27792dea", Controller:(*bool)(0xc0028aa14a), BlockOwnerDeletion:(*bool)(0xc0028aa14b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:09:28.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5588" for this suite. 
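The garbage-collector test above deliberately wires pod1, pod2, and pod3 into an ownership cycle through metadata.ownerReferences and verifies that deletion is not wedged by it. Such references can be inspected with the same go-template style this suite uses elsewhere; the pod and namespace names below are from this (already deleted) run, so substitute your own:

    kubectl get pod pod1 --namespace=gc-5588 -o go-template='{{range .metadata.ownerReferences}}{{.kind}} {{.name}} {{.uid}}{{"\n"}}{{end}}'
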
Jan 29 13:09:34.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:09:34.391: INFO: namespace gc-5588 deletion completed in 6.166485369s • [SLOW TEST:11.472 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:09:34.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-4vvkv in namespace proxy-7685 I0129 13:09:34.598840 8 runners.go:180] Created replication controller with name: proxy-service-4vvkv, namespace: proxy-7685, replica count: 1 I0129 13:09:35.650819 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 13:09:36.651817 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 13:09:37.652958 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 13:09:38.654270 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 13:09:39.655165 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 13:09:40.655857 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 13:09:41.656476 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:42.657324 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:43.658157 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:44.658908 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:45.659852 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:46.660684 8 
runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:47.661603 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0129 13:09:48.662282 8 runners.go:180] proxy-service-4vvkv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 29 13:09:48.675: INFO: setup took 14.153640173s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 29 13:09:48.717: INFO: (0) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 40.363568ms) Jan 29 13:09:48.717: INFO: (0) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 40.564339ms) Jan 29 13:09:48.717: INFO: (0) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 40.471849ms) Jan 29 13:09:48.717: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 40.706491ms) Jan 29 13:09:48.717: INFO: (0) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 40.944335ms) Jan 29 13:09:48.717: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 41.323432ms) Jan 29 13:09:48.718: INFO: (0) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 42.266793ms) Jan 29 13:09:48.718: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 42.135079ms) Jan 29 13:09:48.718: INFO: (0) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 42.98412ms) Jan 29 13:09:48.718: INFO: (0) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 41.829015ms) Jan 29 13:09:48.718: INFO: (0) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 42.12426ms) Jan 29 13:09:48.730: INFO: (0) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 54.438055ms) Jan 29 13:09:48.731: INFO: (0) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 55.249814ms) Jan 29 13:09:48.731: INFO: (0) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 11.186886ms) Jan 29 13:09:48.748: INFO: (1) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 11.714076ms) Jan 29 13:09:48.748: INFO: (1) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.250634ms) Jan 29 13:09:48.748: INFO: (1) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 11.210734ms) Jan 29 13:09:48.748: INFO: (1) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... 
(200; 11.689189ms) Jan 29 13:09:48.748: INFO: (1) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test (200; 16.155175ms) Jan 29 13:09:48.753: INFO: (1) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 16.272458ms) Jan 29 13:09:48.753: INFO: (1) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 16.917435ms) Jan 29 13:09:48.753: INFO: (1) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 16.943826ms) Jan 29 13:09:48.753: INFO: (1) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 16.893909ms) Jan 29 13:09:48.754: INFO: (1) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 17.399789ms) Jan 29 13:09:48.757: INFO: (1) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 19.952388ms) Jan 29 13:09:48.767: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 10.672305ms) Jan 29 13:09:48.767: INFO: (2) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 10.804038ms) Jan 29 13:09:48.767: INFO: (2) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 10.881654ms) Jan 29 13:09:48.767: INFO: (2) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 10.79586ms) Jan 29 13:09:48.768: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 10.766837ms) Jan 29 13:09:48.768: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 10.799055ms) Jan 29 13:09:48.768: INFO: (2) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 11.294728ms) Jan 29 13:09:48.768: INFO: (2) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.460842ms) Jan 29 13:09:48.768: INFO: (2) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 13.038765ms) Jan 29 13:09:48.786: INFO: (3) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 13.263241ms) Jan 29 13:09:48.786: INFO: (3) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 13.731715ms) Jan 29 13:09:48.786: INFO: (3) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: ... 
(200; 13.926337ms) Jan 29 13:09:48.787: INFO: (3) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 13.921248ms) Jan 29 13:09:48.788: INFO: (3) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 14.964081ms) Jan 29 13:09:48.788: INFO: (3) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 15.577876ms) Jan 29 13:09:48.789: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 15.95238ms) Jan 29 13:09:48.789: INFO: (3) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 15.983111ms) Jan 29 13:09:48.790: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 16.917721ms) Jan 29 13:09:48.791: INFO: (3) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 17.623426ms) Jan 29 13:09:48.791: INFO: (3) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 17.711265ms) Jan 29 13:09:48.791: INFO: (3) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 18.059248ms) Jan 29 13:09:48.794: INFO: (3) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 21.48457ms) Jan 29 13:09:48.802: INFO: (4) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 7.140914ms) Jan 29 13:09:48.802: INFO: (4) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 7.679792ms) Jan 29 13:09:48.804: INFO: (4) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 9.351772ms) Jan 29 13:09:48.805: INFO: (4) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test (200; 10.487872ms) Jan 29 13:09:48.806: INFO: (4) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 11.576512ms) Jan 29 13:09:48.806: INFO: (4) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 12.000635ms) Jan 29 13:09:48.806: INFO: (4) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 12.030371ms) Jan 29 13:09:48.808: INFO: (4) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 13.237887ms) Jan 29 13:09:48.809: INFO: (4) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... 
(200; 14.517228ms) Jan 29 13:09:48.811: INFO: (4) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 16.681805ms) Jan 29 13:09:48.811: INFO: (4) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 16.916756ms) Jan 29 13:09:48.812: INFO: (4) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 17.142595ms) Jan 29 13:09:48.812: INFO: (4) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 17.333398ms) Jan 29 13:09:48.812: INFO: (4) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 17.466929ms) Jan 29 13:09:48.813: INFO: (4) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 18.70041ms) Jan 29 13:09:48.824: INFO: (5) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 10.837495ms) Jan 29 13:09:48.824: INFO: (5) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 11.025659ms) Jan 29 13:09:48.825: INFO: (5) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 12.074411ms) Jan 29 13:09:48.826: INFO: (5) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 12.365651ms) Jan 29 13:09:48.826: INFO: (5) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: ... (200; 13.371952ms) Jan 29 13:09:48.827: INFO: (5) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 13.533464ms) Jan 29 13:09:48.827: INFO: (5) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 13.730245ms) Jan 29 13:09:48.827: INFO: (5) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 13.690228ms) Jan 29 13:09:48.827: INFO: (5) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 13.819213ms) Jan 29 13:09:48.828: INFO: (5) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 14.508013ms) Jan 29 13:09:48.828: INFO: (5) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 14.516661ms) Jan 29 13:09:48.828: INFO: (5) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 14.524247ms) Jan 29 13:09:48.828: INFO: (5) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 14.599028ms) Jan 29 13:09:48.829: INFO: (5) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 15.752667ms) Jan 29 13:09:48.838: INFO: (6) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 8.857559ms) Jan 29 13:09:48.838: INFO: (6) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 9.144964ms) Jan 29 13:09:48.838: INFO: (6) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 9.283596ms) Jan 29 13:09:48.839: INFO: (6) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 9.496999ms) Jan 29 13:09:48.839: INFO: (6) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 9.949365ms) Jan 29 13:09:48.839: INFO: (6) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... 
(200; 10.111174ms) Jan 29 13:09:48.839: INFO: (6) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 10.223331ms) Jan 29 13:09:48.839: INFO: (6) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 10.321228ms) Jan 29 13:09:48.840: INFO: (6) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 10.537067ms) Jan 29 13:09:48.840: INFO: (6) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 11.083799ms) Jan 29 13:09:48.841: INFO: (6) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 12.006162ms) Jan 29 13:09:48.841: INFO: (6) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 11.854209ms) Jan 29 13:09:48.842: INFO: (6) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 12.840019ms) Jan 29 13:09:48.852: INFO: (7) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test (200; 12.901896ms) Jan 29 13:09:48.856: INFO: (7) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 13.333173ms) Jan 29 13:09:48.856: INFO: (7) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 13.400372ms) Jan 29 13:09:48.856: INFO: (7) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 13.478498ms) Jan 29 13:09:48.856: INFO: (7) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 14.028542ms) Jan 29 13:09:48.857: INFO: (7) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 14.414784ms) Jan 29 13:09:48.857: INFO: (7) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 14.41085ms) Jan 29 13:09:48.857: INFO: (7) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 14.4801ms) Jan 29 13:09:48.857: INFO: (7) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 14.853948ms) Jan 29 13:09:48.858: INFO: (7) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 15.350633ms) Jan 29 13:09:48.871: INFO: (8) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 13.389635ms) Jan 29 13:09:48.871: INFO: (8) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 13.527213ms) Jan 29 13:09:48.872: INFO: (8) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 14.20855ms) Jan 29 13:09:48.872: INFO: (8) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 14.311735ms) Jan 29 13:09:48.872: INFO: (8) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 14.395521ms) Jan 29 13:09:48.876: INFO: (8) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... 
(200; 18.010478ms) Jan 29 13:09:48.878: INFO: (8) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 19.955712ms) Jan 29 13:09:48.878: INFO: (8) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 20.214078ms) Jan 29 13:09:48.878: INFO: (8) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 20.426057ms) Jan 29 13:09:48.880: INFO: (8) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 22.323924ms) Jan 29 13:09:48.881: INFO: (8) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 22.921113ms) Jan 29 13:09:48.881: INFO: (8) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 22.906212ms) Jan 29 13:09:48.881: INFO: (8) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 23.295168ms) Jan 29 13:09:48.882: INFO: (8) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 24.559047ms) Jan 29 13:09:48.885: INFO: (8) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 26.72795ms) Jan 29 13:09:48.892: INFO: (9) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 7.651594ms) Jan 29 13:09:48.893: INFO: (9) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 7.547558ms) Jan 29 13:09:48.893: INFO: (9) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 8.051425ms) Jan 29 13:09:48.893: INFO: (9) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 7.940325ms) Jan 29 13:09:48.893: INFO: (9) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 8.025487ms) Jan 29 13:09:48.895: INFO: (9) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 10.073147ms) Jan 29 13:09:48.895: INFO: (9) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 10.357012ms) Jan 29 13:09:48.895: INFO: (9) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 10.586367ms) Jan 29 13:09:48.895: INFO: (9) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 10.401801ms) Jan 29 13:09:48.895: INFO: (9) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 10.516697ms) Jan 29 13:09:48.896: INFO: (9) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 10.510611ms) Jan 29 13:09:48.903: INFO: (10) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 7.104278ms) Jan 29 13:09:48.903: INFO: (10) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 7.754826ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 7.890645ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 7.920667ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 7.982878ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... 
(200; 8.051942ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 8.210544ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 8.464407ms) Jan 29 13:09:48.904: INFO: (10) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: ... (200; 10.138313ms) Jan 29 13:09:48.919: INFO: (11) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.668524ms) Jan 29 13:09:48.921: INFO: (11) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 14.005376ms) Jan 29 13:09:48.922: INFO: (11) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 14.727076ms) Jan 29 13:09:48.923: INFO: (11) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 15.56945ms) Jan 29 13:09:48.923: INFO: (11) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 15.755119ms) Jan 29 13:09:48.923: INFO: (11) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 15.754204ms) Jan 29 13:09:48.923: INFO: (11) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 15.63436ms) Jan 29 13:09:48.923: INFO: (11) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 14.130257ms) Jan 29 13:09:48.946: INFO: (12) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 17.300522ms) Jan 29 13:09:48.946: INFO: (12) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 17.066889ms) Jan 29 13:09:48.947: INFO: (12) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 17.142276ms) Jan 29 13:09:48.947: INFO: (12) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 17.173325ms) Jan 29 13:09:48.947: INFO: (12) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 17.239595ms) Jan 29 13:09:48.947: INFO: (12) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 17.590705ms) Jan 29 13:09:48.947: INFO: (12) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 17.480847ms) Jan 29 13:09:48.947: INFO: (12) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 11.102666ms) Jan 29 13:09:49.002: INFO: (13) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.574133ms) Jan 29 13:09:49.002: INFO: (13) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: ... 
(200; 10.990601ms) Jan 29 13:09:49.002: INFO: (13) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.508386ms) Jan 29 13:09:49.002: INFO: (13) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 11.856217ms) Jan 29 13:09:49.003: INFO: (13) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 12.398968ms) Jan 29 13:09:49.003: INFO: (13) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 12.468922ms) Jan 29 13:09:49.003: INFO: (13) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 12.715531ms) Jan 29 13:09:49.004: INFO: (13) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 13.355647ms) Jan 29 13:09:49.004: INFO: (13) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 13.057973ms) Jan 29 13:09:49.004: INFO: (13) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 13.244784ms) Jan 29 13:09:49.005: INFO: (13) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 14.591162ms) Jan 29 13:09:49.005: INFO: (13) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 14.00476ms) Jan 29 13:09:49.005: INFO: (13) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 14.504394ms) Jan 29 13:09:49.013: INFO: (14) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... (200; 7.168577ms) Jan 29 13:09:49.013: INFO: (14) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 7.636454ms) Jan 29 13:09:49.013: INFO: (14) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 8.016747ms) Jan 29 13:09:49.014: INFO: (14) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 8.829593ms) Jan 29 13:09:49.015: INFO: (14) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 9.183899ms) Jan 29 13:09:49.015: INFO: (14) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test (200; 13.760753ms) Jan 29 13:09:49.020: INFO: (14) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 14.679507ms) Jan 29 13:09:49.020: INFO: (14) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 14.916535ms) Jan 29 13:09:49.029: INFO: (15) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 7.965244ms) Jan 29 13:09:49.030: INFO: (15) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 8.911911ms) Jan 29 13:09:49.030: INFO: (15) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 9.809879ms) Jan 29 13:09:49.030: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 9.684953ms) Jan 29 13:09:49.031: INFO: (15) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 10.136836ms) Jan 29 13:09:49.031: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... 
(200; 10.207453ms) Jan 29 13:09:49.031: INFO: (15) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 10.345857ms) Jan 29 13:09:49.031: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 10.451587ms) Jan 29 13:09:49.031: INFO: (15) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 10.624526ms) Jan 29 13:09:49.032: INFO: (15) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 11.664232ms) Jan 29 13:09:49.032: INFO: (15) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test (200; 10.245646ms) Jan 29 13:09:49.045: INFO: (16) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 11.192966ms) Jan 29 13:09:49.045: INFO: (16) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.545957ms) Jan 29 13:09:49.046: INFO: (16) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 11.879351ms) Jan 29 13:09:49.046: INFO: (16) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 15.756161ms) Jan 29 13:09:49.050: INFO: (16) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 15.66347ms) Jan 29 13:09:49.050: INFO: (16) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 15.794209ms) Jan 29 13:09:49.050: INFO: (16) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 15.896115ms) Jan 29 13:09:49.050: INFO: (16) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 15.983272ms) Jan 29 13:09:49.061: INFO: (17) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 11.428196ms) Jan 29 13:09:49.061: INFO: (17) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 11.37805ms) Jan 29 13:09:49.062: INFO: (17) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 12.12077ms) Jan 29 13:09:49.063: INFO: (17) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 13.470636ms) Jan 29 13:09:49.063: INFO: (17) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 13.35455ms) Jan 29 13:09:49.063: INFO: (17) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 13.423378ms) Jan 29 13:09:49.063: INFO: (17) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 13.387115ms) Jan 29 13:09:49.063: INFO: (17) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 13.566789ms) Jan 29 13:09:49.064: INFO: (17) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 13.578872ms) Jan 29 13:09:49.064: INFO: (17) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 13.90425ms) Jan 29 13:09:49.064: INFO: (17) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 14.170492ms) Jan 29 13:09:49.064: INFO: (17) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... 
(200; 14.542386ms) Jan 29 13:09:49.065: INFO: (17) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 15.490169ms) Jan 29 13:09:49.066: INFO: (17) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 15.740295ms) Jan 29 13:09:49.066: INFO: (17) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: ... (200; 10.225967ms) Jan 29 13:09:49.077: INFO: (18) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 10.475355ms) Jan 29 13:09:49.077: INFO: (18) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 10.649538ms) Jan 29 13:09:49.077: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 10.616546ms) Jan 29 13:09:49.077: INFO: (18) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 10.687918ms) Jan 29 13:09:49.077: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 10.711575ms) Jan 29 13:09:49.077: INFO: (18) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test<... (200; 11.414134ms) Jan 29 13:09:49.078: INFO: (18) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 11.781949ms) Jan 29 13:09:49.078: INFO: (18) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 12.104961ms) Jan 29 13:09:49.078: INFO: (18) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 11.931544ms) Jan 29 13:09:49.078: INFO: (18) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg/proxy/: test (200; 12.233592ms) Jan 29 13:09:49.078: INFO: (18) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 12.36299ms) Jan 29 13:09:49.079: INFO: (18) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 12.513939ms) Jan 29 13:09:49.079: INFO: (18) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 12.354828ms) Jan 29 13:09:49.080: INFO: (18) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 13.575344ms) Jan 29 13:09:49.085: INFO: (19) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 4.980683ms) Jan 29 13:09:49.085: INFO: (19) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:443/proxy/: test (200; 5.480294ms) Jan 29 13:09:49.085: INFO: (19) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:160/proxy/: foo (200; 5.503478ms) Jan 29 13:09:49.085: INFO: (19) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:1080/proxy/: ... (200; 5.763625ms) Jan 29 13:09:49.085: INFO: (19) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:462/proxy/: tls qux (200; 5.881855ms) Jan 29 13:09:49.086: INFO: (19) /api/v1/namespaces/proxy-7685/pods/http:proxy-service-4vvkv-2ppxg:162/proxy/: bar (200; 6.015019ms) Jan 29 13:09:49.086: INFO: (19) /api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/: tls baz (200; 6.042502ms) Jan 29 13:09:49.086: INFO: (19) /api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:1080/proxy/: test<... 
(200; 6.070491ms) Jan 29 13:09:49.104: INFO: (19) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname1/proxy/: foo (200; 24.832694ms) Jan 29 13:09:49.106: INFO: (19) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/: foo (200; 25.814474ms) Jan 29 13:09:49.106: INFO: (19) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname2/proxy/: tls qux (200; 26.404149ms) Jan 29 13:09:49.106: INFO: (19) /api/v1/namespaces/proxy-7685/services/http:proxy-service-4vvkv:portname2/proxy/: bar (200; 26.513054ms) Jan 29 13:09:49.106: INFO: (19) /api/v1/namespaces/proxy-7685/services/https:proxy-service-4vvkv:tlsportname1/proxy/: tls baz (200; 26.673839ms) Jan 29 13:09:49.106: INFO: (19) /api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname2/proxy/: bar (200; 26.768052ms) STEP: deleting ReplicationController proxy-service-4vvkv in namespace proxy-7685, will wait for the garbage collector to delete the pods Jan 29 13:09:49.166: INFO: Deleting ReplicationController proxy-service-4vvkv took: 7.259544ms Jan 29 13:09:49.267: INFO: Terminating ReplicationController proxy-service-4vvkv pods took: 100.636089ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:09:56.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7685" for this suite. Jan 29 13:10:02.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:10:02.750: INFO: namespace proxy-7685 deletion completed in 6.162788501s • [SLOW TEST:28.359 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:10:02.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0129 13:10:33.451015 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
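The [sig-network] Proxy spec that completes above fans out GET requests over every apiserver proxy subresource variant: a pod by name, a pod by name:port, the same endpoints with an explicit http:/https: scheme prefix, and a service by name:portname; each "(N)" group is one of 20 passes over the full endpoint set, logged with per-request latency. As a rough manual equivalent (endpoint names copied from the log above; kubectl get --raw issues a GET through the apiserver much like the test client does):

    # Service proxy: route through the service's named port "portname1" (body "foo", HTTP 200)
    kubectl get --raw "/api/v1/namespaces/proxy-7685/services/proxy-service-4vvkv:portname1/proxy/"

    # Pod proxy: route straight to the backend pod on container port 162 (body "bar")
    kubectl get --raw "/api/v1/namespaces/proxy-7685/pods/proxy-service-4vvkv-2ppxg:162/proxy/"

    # The https: prefix makes the apiserver dial the backend over TLS (body "tls baz")
    kubectl get --raw "/api/v1/namespaces/proxy-7685/pods/https:proxy-service-4vvkv-2ppxg:460/proxy/"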
Jan 29 13:10:33.451: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:10:33.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2386" for this suite. Jan 29 13:10:42.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:10:43.000: INFO: namespace gc-2386 deletion completed in 9.544202601s • [SLOW TEST:40.248 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:10:43.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jan 29 13:10:43.177: INFO: Waiting up to 5m0s for pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab" in namespace "emptydir-3289" to be "success or failure" Jan 29 13:10:43.193: INFO: Pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab": Phase="Pending", Reason="", readiness=false. Elapsed: 15.879779ms Jan 29 13:10:45.199: INFO: Pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02142715s Jan 29 13:10:47.219: INFO: Pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042093525s Jan 29 13:10:49.228: INFO: Pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.050588504s Jan 29 13:10:51.236: INFO: Pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05915467s STEP: Saw pod success Jan 29 13:10:51.237: INFO: Pod "pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab" satisfied condition "success or failure" Jan 29 13:10:51.241: INFO: Trying to get logs from node iruya-node pod pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab container test-container: STEP: delete the pod Jan 29 13:10:51.370: INFO: Waiting for pod pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab to disappear Jan 29 13:10:51.390: INFO: Pod pod-c2a37c0f-0299-4b3b-b566-9c13e5d46aab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:10:51.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3289" for this suite. Jan 29 13:10:57.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:10:57.604: INFO: namespace emptydir-3289 deletion completed in 6.206670579s • [SLOW TEST:14.603 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:10:57.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:11:28.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3033" for this suite. Jan 29 13:11:34.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:11:34.394: INFO: namespace namespaces-3033 deletion completed in 6.260981581s STEP: Destroying namespace "nsdeletetest-1058" for this suite. Jan 29 13:11:34.396: INFO: Namespace nsdeletetest-1058 was already deleted STEP: Destroying namespace "nsdeletetest-5964" for this suite. 
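The Garbage collector spec earlier in this stretch deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan, then waits 30 seconds to confirm the garbage collector leaves the Deployment's ReplicaSet alone. A sketch of the same check from the command line; the deployment name is hypothetical, and on the kubectl vintage this run targets (v1.15) orphaning is spelled --cascade=false (later releases rename it --cascade=orphan):

    # The deployment controller creates one ReplicaSet (labelled app=<name>) on our behalf
    kubectl create deployment orphan-demo --image=nginx:1.14-alpine
    kubectl get rs -l app=orphan-demo

    # Delete only the Deployment, orphaning its dependents
    kubectl delete deployment orphan-demo --cascade=false

    # The ReplicaSet (and its pods) survive, now without an ownerReference
    kubectl get rs -l app=orphan-demo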
Jan 29 13:11:40.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:11:40.531: INFO: namespace nsdeletetest-5964 deletion completed in 6.134336174s • [SLOW TEST:42.927 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:11:40.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 29 13:11:40.724: INFO: Waiting up to 5m0s for pod "pod-9950f199-96b4-4776-86b1-097731ecac14" in namespace "emptydir-2422" to be "success or failure" Jan 29 13:11:40.736: INFO: Pod "pod-9950f199-96b4-4776-86b1-097731ecac14": Phase="Pending", Reason="", readiness=false. Elapsed: 11.76522ms Jan 29 13:11:42.748: INFO: Pod "pod-9950f199-96b4-4776-86b1-097731ecac14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023663273s Jan 29 13:11:44.756: INFO: Pod "pod-9950f199-96b4-4776-86b1-097731ecac14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031799704s Jan 29 13:11:46.762: INFO: Pod "pod-9950f199-96b4-4776-86b1-097731ecac14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038372034s Jan 29 13:11:48.781: INFO: Pod "pod-9950f199-96b4-4776-86b1-097731ecac14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057179382s STEP: Saw pod success Jan 29 13:11:48.781: INFO: Pod "pod-9950f199-96b4-4776-86b1-097731ecac14" satisfied condition "success or failure" Jan 29 13:11:48.790: INFO: Trying to get logs from node iruya-node pod pod-9950f199-96b4-4776-86b1-097731ecac14 container test-container: STEP: delete the pod Jan 29 13:11:49.009: INFO: Waiting for pod pod-9950f199-96b4-4776-86b1-097731ecac14 to disappear Jan 29 13:11:49.106: INFO: Pod pod-9950f199-96b4-4776-86b1-097731ecac14 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:11:49.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2422" for this suite. 
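The Namespaces [Serial] spec above checks that deleting a namespace tears down the pods inside it, and that recreating a namespace of the same name yields an empty one. A minimal sketch, with hypothetical names:

    kubectl create namespace nsdelete-demo
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
      namespace: nsdelete-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
    EOF

    # Deleting the namespace deletes the pod with it (finalizers make this asynchronous)
    kubectl delete namespace nsdelete-demo
    kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s

    # A recreated namespace of the same name starts empty
    kubectl create namespace nsdelete-demo
    kubectl get pods -n nsdelete-demo    # "No resources found."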
Jan 29 13:11:55.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:11:55.223: INFO: namespace emptydir-2422 deletion completed in 6.109225902s • [SLOW TEST:14.691 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:11:55.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-27fe9e75-cc4d-4787-a63c-40ac56fa9d64 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-27fe9e75-cc4d-4787-a63c-40ac56fa9d64 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:12:05.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2095" for this suite. 
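The two EmptyDir specs above ("volume on default medium" and "volume on tmpfs") create a pod whose container reports the mode bits of an emptyDir mount and assert the expected permissions; "Saw pod success" means the container exited 0 with the expected output. The real tests use the e2e framework's mounttest image, but the pod shape is simple; a hand-rolled equivalent (all names hypothetical):

    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode bits
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}          # default medium: backed by node storage
        # For the tmpfs variant exercised by the second spec, use:
        # emptyDir:
        #   medium: Memory
    EOF
    kubectl logs emptydir-mode-demo   # once Succeeded, shows the directory mode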
Jan 29 13:12:27.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:12:27.696: INFO: namespace configmap-2095 deletion completed in 22.131690167s • [SLOW TEST:32.473 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:12:27.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 29 13:12:27.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1" in namespace "downward-api-8371" to be "success or failure" Jan 29 13:12:27.916: INFO: Pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561181ms Jan 29 13:12:29.925: INFO: Pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017715573s Jan 29 13:12:31.962: INFO: Pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054062542s Jan 29 13:12:33.975: INFO: Pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067521058s Jan 29 13:12:35.987: INFO: Pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079849873s STEP: Saw pod success Jan 29 13:12:35.987: INFO: Pod "downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1" satisfied condition "success or failure" Jan 29 13:12:35.992: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1 container client-container: STEP: delete the pod Jan 29 13:12:36.056: INFO: Waiting for pod downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1 to disappear Jan 29 13:12:36.200: INFO: Pod downwardapi-volume-03debb94-e2c3-48b0-9dfa-e78956a8add1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:12:36.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8371" for this suite. 
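The ConfigMap spec above mounts a ConfigMap as a volume, updates the ConfigMap through the API, and then waits to observe the new value inside the running pod; the wait exists because the kubelet projects updates on its periodic sync rather than instantly, which is also why this spec runs long. A manual sketch, names hypothetical:

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-watch
    spec:
      containers:
      - name: watcher
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        configMap:
          name: demo-cm
    EOF

    # Update the value; within the kubelet sync period the pod's logs switch to value-2
    kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
    kubectl logs -f cm-watch

Note that this live update applies to whole-volume mounts; a container mounting the same key via subPath would not see the change.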
Jan 29 13:12:44.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:12:44.412: INFO: namespace downward-api-8371 deletion completed in 8.19811822s • [SLOW TEST:16.716 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:12:44.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 29 13:12:53.644: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:12:54.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5254" for this suite. 
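The Downward API spec above asks for the container's own memory limit through a downwardAPI volume and checks the projected file's contents ("Saw pod success"). The key pieces are a resources.limits entry and a volume item whose resourceFieldRef points back at the container; with the default divisor of 1 the value is rendered in bytes. A sketch with hypothetical names; the same pattern with resource: requests.cpu (and a divisor of 1m) is what the cpu-request spec later in this log exercises:

    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF
    kubectl logs downward-demo   # 67108864, i.e. 64Mi in bytes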
Jan 29 13:13:34.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:13:34.980: INFO: namespace replicaset-5254 deletion completed in 40.279048762s • [SLOW TEST:50.568 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:13:34.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 29 13:13:35.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20" in namespace "downward-api-9134" to be "success or failure" Jan 29 13:13:35.147: INFO: Pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20": Phase="Pending", Reason="", readiness=false. Elapsed: 13.016724ms Jan 29 13:13:37.159: INFO: Pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025088001s Jan 29 13:13:39.170: INFO: Pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035723292s Jan 29 13:13:41.179: INFO: Pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045084095s Jan 29 13:13:43.190: INFO: Pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055857237s STEP: Saw pod success Jan 29 13:13:43.190: INFO: Pod "downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20" satisfied condition "success or failure" Jan 29 13:13:43.192: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20 container client-container: STEP: delete the pod Jan 29 13:13:43.264: INFO: Waiting for pod downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20 to disappear Jan 29 13:13:43.316: INFO: Pod downwardapi-volume-415d6463-8a3e-40d8-9d4a-9001a4de3f20 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:13:43.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9134" for this suite. 
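The ReplicaSet spec above shows ownership moving in both directions: a pre-existing bare pod whose labels match a new ReplicaSet's selector is adopted (it gains an ownerReference), and rewriting that label afterwards makes the ReplicaSet release the pod and create a replacement to restore its replica count. A sketch with hypothetical names:

    # A bare pod carrying the label the ReplicaSet will select
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-adoption-release
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
    EOF

    # A ReplicaSet with a matching selector adopts the existing pod instead of creating one
    cat <<EOF | kubectl create -f -
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: pod-adoption-release
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: pod-adoption-release
      template:
        metadata:
          labels:
            name: pod-adoption-release
        spec:
          containers:
          - name: nginx
            image: nginx:1.14-alpine
    EOF
    kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'

    # Changing the label releases the pod; the ReplicaSet then spawns a replacement
    kubectl label pod pod-adoption-release name=released --overwrite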
Jan 29 13:13:49.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:13:49.483: INFO: namespace downward-api-9134 deletion completed in 6.156835069s • [SLOW TEST:14.502 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:13:49.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 29 13:13:49.616: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:14:08.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5097" for this suite. 
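The InitContainer spec above verifies that on a pod with restartPolicy Always, every init container runs to completion, in order, before the regular containers start; most of this spec's wall time is image pulls plus the sequential init runs. A minimal equivalent of the pod shape the test builds (hypothetical names; the conformance test puts trivially succeeding busybox init containers in front of a long-running container):

    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["/bin/true"]
      - name: init2
        image: busybox:1.29
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1
    EOF
    # STATUS moves through Init:0/2 and Init:1/2 before settling on Running
    kubectl get pod init-demo -w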
Jan 29 13:14:32.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:14:32.211: INFO: namespace init-container-5097 deletion completed in 24.140550086s • [SLOW TEST:42.728 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:14:32.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 29 13:14:32.328: INFO: Creating deployment "nginx-deployment" Jan 29 13:14:32.335: INFO: Waiting for observed generation 1 Jan 29 13:14:34.556: INFO: Waiting for all required pods to come up Jan 29 13:14:35.308: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 29 13:14:57.494: INFO: Waiting for deployment "nginx-deployment" to complete Jan 29 13:14:57.503: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 29 13:14:57.515: INFO: Updating deployment nginx-deployment Jan 29 13:14:57.516: INFO: Waiting for observed generation 2 Jan 29 13:15:00.538: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 29 13:15:02.646: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 29 13:15:03.441: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 29 13:15:06.031: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 29 13:15:06.031: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 29 13:15:06.037: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 29 13:15:06.395: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 29 13:15:06.395: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 29 13:15:06.408: INFO: Updating deployment nginx-deployment Jan 29 13:15:06.408: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 29 13:15:07.243: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 29 13:15:11.102: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 29 13:15:14.384: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2760,SelfLink:/apis/apps/v1/namespaces/deployment-2760/deployments/nginx-deployment,UID:7105c01d-0967-41fd-8dbb-82a31ff20c1f,ResourceVersion:22313695,Generation:3,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-29 13:15:06 +0000 UTC 2020-01-29 13:15:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-29 13:15:11 +0000 UTC 2020-01-29 13:14:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 29 13:15:17.570: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2760,SelfLink:/apis/apps/v1/namespaces/deployment-2760/replicasets/nginx-deployment-55fb7cb77f,UID:bb1ca527-a0aa-42e8-94aa-68990e119d0d,ResourceVersion:22313689,Generation:3,CreationTimestamp:2020-01-29 13:14:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7105c01d-0967-41fd-8dbb-82a31ff20c1f 0xc0010a26f7 0xc0010a26f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 29 13:15:17.570: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 29 13:15:17.571: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2760,SelfLink:/apis/apps/v1/namespaces/deployment-2760/replicasets/nginx-deployment-7b8c6f4498,UID:ae93a9c0-68ed-4b52-8e95-be92328d93eb,ResourceVersion:22313690,Generation:3,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7105c01d-0967-41fd-8dbb-82a31ff20c1f 0xc0010a27c7 0xc0010a27c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 29 13:15:19.504: INFO: Pod "nginx-deployment-55fb7cb77f-49tb7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-49tb7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-49tb7,UID:0cfbe54f-9ed3-4d6a-b0f6-1f5c8c018e34,ResourceVersion:22313694,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc0028167c7 0xc0028167c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002816840} {node.kubernetes.io/unreachable Exists NoExecute 0xc002816860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.505: INFO: Pod "nginx-deployment-55fb7cb77f-4mgbr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4mgbr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-4mgbr,UID:990cb3e3-8342-4783-bf21-4854b2ba2234,ResourceVersion:22313707,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002816937 0xc002816938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028169b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028169d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.505: INFO: Pod "nginx-deployment-55fb7cb77f-4mpgx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4mpgx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-4mpgx,UID:f3515bae-95f9-49a7-9a78-8e57327e71c3,ResourceVersion:22313721,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002816aa7 0xc002816aa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002816b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002816b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.505: INFO: Pod "nginx-deployment-55fb7cb77f-58r52" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-58r52,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-58r52,UID:da0f4828-129d-43cb-8b84-efca0cbb2552,ResourceVersion:22313713,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002816c17 0xc002816c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002816c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002816ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-29 13:15:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.506: INFO: Pod "nginx-deployment-55fb7cb77f-8h6ll" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8h6ll,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-8h6ll,UID:3b309f13-56b3-4172-bf5e-4f60dc4dd8a4,ResourceVersion:22313665,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002816d97 0xc002816d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002816e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002816e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.506: INFO: Pod "nginx-deployment-55fb7cb77f-dm972" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dm972,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-dm972,UID:2f59d233-35e0-4e70-afb5-2f2c7f138d5e,ResourceVersion:22313678,Generation:0,CreationTimestamp:2020-01-29 13:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002816ea7 0xc002816ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002816f20} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002816f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-29 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.507: INFO: Pod "nginx-deployment-55fb7cb77f-fbqvf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fbqvf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-fbqvf,UID:2ebc7efd-e377-4bfa-8c17-33f4158cece7,ResourceVersion:22313595,Generation:0,CreationTimestamp:2020-01-29 13:14:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002817017 0xc002817018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817080} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028170a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-29 13:14:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.507: INFO: Pod "nginx-deployment-55fb7cb77f-gb77k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gb77k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-gb77k,UID:155da898-eebc-475f-8b51-999dcc4a2df8,ResourceVersion:22313586,Generation:0,CreationTimestamp:2020-01-29 13:14:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002817177 0xc002817178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028171f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:14:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.508: INFO: Pod "nginx-deployment-55fb7cb77f-j4jgt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j4jgt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-j4jgt,UID:9b0dc2aa-0a1e-4407-95d9-c47dfd5b3f5a,ResourceVersion:22313622,Generation:0,CreationTimestamp:2020-01-29 13:14:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc0028172e7 0xc0028172e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:15:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.508: INFO: Pod "nginx-deployment-55fb7cb77f-l9kvp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l9kvp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-l9kvp,UID:d454c1ea-fbc8-49da-96e5-e275105b7da0,ResourceVersion:22313602,Generation:0,CreationTimestamp:2020-01-29 13:14:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002817457 0xc002817458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028174d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028174f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:14:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.509: INFO: Pod "nginx-deployment-55fb7cb77f-lbsjb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lbsjb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-lbsjb,UID:1c8513ea-67bb-4741-8043-3c4c7f3c9f03,ResourceVersion:22313623,Generation:0,CreationTimestamp:2020-01-29 13:14:58 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc0028175c7 0xc0028175c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817630} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-29 13:15:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.510: INFO: Pod "nginx-deployment-55fb7cb77f-lkbvl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lkbvl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-lkbvl,UID:f42e4f89-57a4-4edc-86ae-06237f4c125c,ResourceVersion:22313663,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002817727 0xc002817728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028177a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028177c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.510: INFO: Pod "nginx-deployment-55fb7cb77f-v75sh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v75sh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-55fb7cb77f-v75sh,UID:4c6fe0a5-aa5e-474c-ab68-316cbacaab5f,ResourceVersion:22313674,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f bb1ca527-a0aa-42e8-94aa-68990e119d0d 0xc002817847 0xc002817848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028178c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028178e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.511: INFO: Pod "nginx-deployment-7b8c6f4498-2wfj9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2wfj9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-2wfj9,UID:f5005cc6-fe78-4c50-a395-634a5fe5b1a2,ResourceVersion:22313523,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc002817967 0xc002817968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028179e0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002817a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6d67332be3c5486c4d3c26a4226d835398b5c4c1d84a5906b7739140d6b53e96}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.512: INFO: Pod "nginx-deployment-7b8c6f4498-4qgjd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4qgjd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-4qgjd,UID:644e0f62-2052-480d-9ec2-2e17ed7324a0,ResourceVersion:22313684,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc002817ad7 0xc002817ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:08 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.512: INFO: Pod "nginx-deployment-7b8c6f4498-9nv5v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nv5v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-9nv5v,UID:21c38565-53d3-41be-8a21-6c565b4190db,ResourceVersion:22313664,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc002817be7 0xc002817be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.513: INFO: Pod "nginx-deployment-7b8c6f4498-cdls2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cdls2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-cdls2,UID:c3a85ca9-2050-4b72-bd20-eec18630f5a5,ResourceVersion:22313656,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc002817d07 
0xc002817d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.513: INFO: Pod "nginx-deployment-7b8c6f4498-f9tgp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f9tgp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-f9tgp,UID:459a501b-750c-4790-80ba-8a5119e55c32,ResourceVersion:22313687,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc002817e17 0xc002817e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002817eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.513: INFO: Pod "nginx-deployment-7b8c6f4498-g29kn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g29kn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-g29kn,UID:bf0f1246-32d5-4de5-9c96-f34a666b1cea,ResourceVersion:22313672,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc002817f57 0xc002817f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002817fc0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002817fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.514: INFO: Pod "nginx-deployment-7b8c6f4498-g5jjj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g5jjj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-g5jjj,UID:801f30ba-9b5c-4bd6-8a15-61f69830aaab,ResourceVersion:22313537,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c067 0xc00264c068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker://9f5861e6f7aaf16d2d7001b35968665ee87e674c43f2db23153e97d14f1bcef2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.514: INFO: Pod "nginx-deployment-7b8c6f4498-gjpbc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gjpbc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-gjpbc,UID:4a639565-5090-43d2-847c-a815875ed6a3,ResourceVersion:22313561,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c1d7 0xc00264c1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c240} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b2b5de697f03ac063b139780603e027187bda144d493ccbc4086c501863a75cd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.514: INFO: Pod "nginx-deployment-7b8c6f4498-gpgpb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gpgpb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-gpgpb,UID:db2bfd8e-2772-4f23-8aa5-21208109fb7d,ResourceVersion:22313686,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c337 0xc00264c338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.515: INFO: Pod "nginx-deployment-7b8c6f4498-nhkx7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nhkx7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-nhkx7,UID:b2021a2b-805b-4b9b-8aed-cbc741ec605b,ResourceVersion:22313528,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c457 0xc00264c458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://429501c8ebc80ac12f5d3757dabf7174ba9f0977807378991a5cff5ef80a2c31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.515: INFO: Pod "nginx-deployment-7b8c6f4498-q47k2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q47k2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-q47k2,UID:8cfcb220-1135-436b-97cc-544f95477d10,ResourceVersion:22313555,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c5d7 0xc00264c5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c650} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:55 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e7cbb458865c0e3287536853162ca8349e9edf5e54ee51000f6cc102a9c5e94e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.515: INFO: Pod "nginx-deployment-7b8c6f4498-qzzcr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qzzcr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-qzzcr,UID:a4690e43-65fa-4814-bdaa-dc897d1b0eba,ResourceVersion:22313702,Generation:0,CreationTimestamp:2020-01-29 13:15:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c747 0xc00264c748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-29 13:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.516: INFO: Pod "nginx-deployment-7b8c6f4498-sln9s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sln9s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-sln9s,UID:65cd1837-f7d2-4d8e-8a1e-87a295eaf85a,ResourceVersion:22313673,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c897 0xc00264c898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264c910} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264c930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.516: INFO: Pod "nginx-deployment-7b8c6f4498-spjms" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-spjms,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-spjms,UID:5fdba23d-eab4-4d60-85cf-dc91546ed5a7,ResourceVersion:22313685,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264c9b7 0xc00264c9b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264ca30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00264ca50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.516: INFO: Pod "nginx-deployment-7b8c6f4498-tkpkz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tkpkz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-tkpkz,UID:0510e570-39b3-4a8a-baaf-d663b97aa314,ResourceVersion:22313667,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264cad7 0xc00264cad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264cb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264cb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.517: INFO: Pod "nginx-deployment-7b8c6f4498-v5wkt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v5wkt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-v5wkt,UID:530f715b-b38d-4833-bfab-d826fdb6cbb2,ResourceVersion:22313540,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264cbf7 0xc00264cbf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264cc70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264cc90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://449f000e3d8a8b9bca7ca993a04153956af5e11e637298e6be219972e42d979a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.517: INFO: Pod "nginx-deployment-7b8c6f4498-vrr92" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vrr92,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-vrr92,UID:f31d1658-8cfe-4c2e-9bc7-1476c8d08a1b,ResourceVersion:22313549,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264cd67 
0xc00264cd68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264cdd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264cdf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3e6dafd1f5a13133e0e9139ed8e64b005fa71b6ccecb1184318620ed354aa8a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.518: INFO: Pod "nginx-deployment-7b8c6f4498-vzwvd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vzwvd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-vzwvd,UID:d56f953a-1a94-49fe-a172-3315e49f635d,ResourceVersion:22313681,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264cec7 0xc00264cec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264cf30} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264cf50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.518: INFO: Pod "nginx-deployment-7b8c6f4498-x6bh5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x6bh5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-x6bh5,UID:c1538a0b-7d57-4068-bf25-ae74929acddf,ResourceVersion:22313675,Generation:0,CreationTimestamp:2020-01-29 13:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264cfd7 0xc00264cfd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264d040} {node.kubernetes.io/unreachable Exists NoExecute 0xc00264d060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:15:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 29 13:15:19.518: INFO: Pod "nginx-deployment-7b8c6f4498-xmsrq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xmsrq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2760,SelfLink:/api/v1/namespaces/deployment-2760/pods/nginx-deployment-7b8c6f4498-xmsrq,UID:d3ee2cda-1fb0-431c-a934-919e86d28655,ResourceVersion:22313533,Generation:0,CreationTimestamp:2020-01-29 13:14:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ae93a9c0-68ed-4b52-8e95-be92328d93eb 0xc00264d0e7 0xc00264d0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kfwx2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kfwx2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kfwx2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00264d160} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00264d180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:14:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-29 13:14:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 13:14:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9f36c6feb7fa8c5592c23c3f67f69b465f7b5a23e159a2d973bb55c12a4fd877}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:15:19.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2760" for this suite.
Jan 29 13:16:36.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:16:36.971: INFO: namespace deployment-2760 deletion completed in 1m17.157826698s
• [SLOW TEST:124.760 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:16:36.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0129 13:16:40.513491 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
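(What this spec exercises, sketched by hand: deleting a Deployment without orphaning lets the garbage collector remove the ReplicaSet and pods it owns. The deployment name below is illustrative, not taken from the log.)

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine   # the Deployment controller creates an owned ReplicaSet
kubectl get rs -l app=gc-demo        # the ReplicaSet carries an ownerReference back to the Deployment
kubectl delete deployment gc-demo    # default (non-orphaning) delete: the garbage collector removes the RS and its pods
kubectl get rs -l app=gc-demo        # eventually reports nothing, which is what this spec asserts
# Deleting with --cascade=false instead would orphan the ReplicaSet, the case this spec must avoid.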
Jan 29 13:16:40.513: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:16:40.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3777" for this suite.
Jan 29 13:16:48.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:16:49.027: INFO: namespace gc-3777 deletion completed in 8.503785835s
• [SLOW TEST:12.054 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial]
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:16:49.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 13:16:49.385: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
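(For reference, the "simple daemon set" above can be approximated with the manifest below; the image matches the log, while the label key and container name are assumptions for illustration. The image update performed later in this spec is sketched after the update log.)

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate        # replace pods in place, node by node, when the template changes
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF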
Jan 29 13:16:49.445: INFO: Number of nodes with available pods: 0
Jan 29 13:16:49.445: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:51.061: INFO: Number of nodes with available pods: 0
Jan 29 13:16:51.061: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:51.645: INFO: Number of nodes with available pods: 0
Jan 29 13:16:51.645: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:53.146: INFO: Number of nodes with available pods: 0
Jan 29 13:16:53.146: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:53.462: INFO: Number of nodes with available pods: 0
Jan 29 13:16:53.462: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:54.471: INFO: Number of nodes with available pods: 0
Jan 29 13:16:54.471: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:55.471: INFO: Number of nodes with available pods: 0
Jan 29 13:16:55.471: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:56.800: INFO: Number of nodes with available pods: 0
Jan 29 13:16:56.800: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:57.459: INFO: Number of nodes with available pods: 0
Jan 29 13:16:57.459: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:58.468: INFO: Number of nodes with available pods: 0
Jan 29 13:16:58.468: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:16:59.468: INFO: Number of nodes with available pods: 0
Jan 29 13:16:59.468: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:17:00.479: INFO: Number of nodes with available pods: 1
Jan 29 13:17:00.479: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:17:01.462: INFO: Number of nodes with available pods: 2
Jan 29 13:17:01.462: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 29 13:17:01.504: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:01.504: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:02.562: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:02.562: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:03.557: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:03.557: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:04.561: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:04.561: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:05.560: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:05.561: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:06.562: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:06.562: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:07.559: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:07.559: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:08.606: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:08.606: INFO: Pod daemon-set-8h4kw is not available
Jan 29 13:17:08.607: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:09.603: INFO: Wrong image for pod: daemon-set-8h4kw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:09.603: INFO: Pod daemon-set-8h4kw is not available
Jan 29 13:17:09.603: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:10.561: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:10.561: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:11.560: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:11.560: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:12.573: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:12.573: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:13.561: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:13.561: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:14.568: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:14.568: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:15.557: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:15.557: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:16.561: INFO: Pod daemon-set-q55dk is not available
Jan 29 13:17:16.562: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:18.088: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:18.566: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:19.857: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:20.572: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:21.569: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:22.560: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:22.560: INFO: Pod daemon-set-x4dkw is not available
Jan 29 13:17:23.553: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:23.553: INFO: Pod daemon-set-x4dkw is not available
Jan 29 13:17:24.558: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:24.558: INFO: Pod daemon-set-x4dkw is not available
Jan 29 13:17:25.612: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:25.613: INFO: Pod daemon-set-x4dkw is not available
Jan 29 13:17:26.560: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:26.560: INFO: Pod daemon-set-x4dkw is not available
Jan 29 13:17:27.622: INFO: Wrong image for pod: daemon-set-x4dkw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 13:17:27.622: INFO: Pod daemon-set-x4dkw is not available
Jan 29 13:17:28.560: INFO: Pod daemon-set-mwpmc is not available
STEP: Check that daemon pods are still running on every node of the cluster.
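(The image flip that produced the churn above has a direct kubectl equivalent; a sketch, reusing the assumed container name "app" from the manifest sketch earlier:)

kubectl -n daemonsets-8113 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n daemonsets-8113 rollout status daemonset/daemon-set   # returns once every node runs the new image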
Jan 29 13:17:28.593: INFO: Number of nodes with available pods: 1
Jan 29 13:17:28.593: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:29.807: INFO: Number of nodes with available pods: 1
Jan 29 13:17:29.807: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:30.618: INFO: Number of nodes with available pods: 1
Jan 29 13:17:30.619: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:31.663: INFO: Number of nodes with available pods: 1
Jan 29 13:17:31.663: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:32.688: INFO: Number of nodes with available pods: 1
Jan 29 13:17:32.688: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:33.840: INFO: Number of nodes with available pods: 1
Jan 29 13:17:33.840: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:34.652: INFO: Number of nodes with available pods: 1
Jan 29 13:17:34.653: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:17:35.622: INFO: Number of nodes with available pods: 2
Jan 29 13:17:35.622: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8113, will wait for the garbage collector to delete the pods
Jan 29 13:17:35.712: INFO: Deleting DaemonSet.extensions daemon-set took: 19.212979ms
Jan 29 13:17:36.013: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.595098ms
Jan 29 13:17:42.636: INFO: Number of nodes with available pods: 0
Jan 29 13:17:42.636: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 13:17:42.642: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8113/daemonsets","resourceVersion":"22314288"},"items":null}
Jan 29 13:17:42.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8113/pods","resourceVersion":"22314288"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:17:42.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8113" for this suite.
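(The two JSON dumps above record that nothing is left after cleanup; assuming access to the same cluster, the equivalent queries would be:)

kubectl -n daemonsets-8113 get daemonsets -o json   # empty list once the DaemonSet is deleted
kubectl -n daemonsets-8113 get pods -o json         # empty list once the garbage collector has removed the pods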
Jan 29 13:17:48.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:17:48.782: INFO: namespace daemonsets-8113 deletion completed in 6.119551553s
• [SLOW TEST:59.756 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:17:48.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:17:48.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7" in namespace "projected-6472" to be "success or failure"
Jan 29 13:17:48.925: INFO: Pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.913202ms
Jan 29 13:17:50.979: INFO: Pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06183636s
Jan 29 13:17:53.006: INFO: Pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088564565s
Jan 29 13:17:55.012: INFO: Pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09504053s
Jan 29 13:17:57.019: INFO: Pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101607954s
STEP: Saw pod success
Jan 29 13:17:57.019: INFO: Pod "downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7" satisfied condition "success or failure"
Jan 29 13:17:57.023: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7 container client-container:
STEP: delete the pod
Jan 29 13:17:57.097: INFO: Waiting for pod downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7 to disappear
Jan 29 13:17:57.100: INFO: Pod downwardapi-volume-9cded748-7339-4771-ae81-10d5e3b8f1e7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:17:57.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6472" for this suite.
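(A minimal sketch of the kind of pod this spec creates: a projected downwardAPI volume exposing the container's own memory request as a file. The pod name, image and the 32Mi request are illustrative, not taken from the log; the projected/downwardAPI volume layout is the mechanism under test.)

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]   # prints the request in bytes, e.g. 33554432
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF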
Jan 29 13:18:03.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:18:03.221: INFO: namespace projected-6472 deletion completed in 6.116502728s
• [SLOW TEST:14.438 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:18:03.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-pq54
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 13:18:03.385: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pq54" in namespace "subpath-620" to be "success or failure"
Jan 29 13:18:03.413: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Pending", Reason="", readiness=false. Elapsed: 27.11502ms
Jan 29 13:18:06.050: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664294013s
Jan 29 13:18:08.058: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.672926222s
Jan 29 13:18:10.069: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.683424202s
Jan 29 13:18:12.080: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 8.694047225s
Jan 29 13:18:14.088: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 10.702914056s
Jan 29 13:18:16.098: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 12.712249968s
Jan 29 13:18:18.108: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 14.722574802s
Jan 29 13:18:20.117: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 16.731371945s
Jan 29 13:18:22.128: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 18.742264605s
Jan 29 13:18:24.136: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 20.75015146s
Jan 29 13:18:26.145: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 22.759783036s
Jan 29 13:18:28.153: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 24.767536355s
Jan 29 13:18:30.161: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Running", Reason="", readiness=true. Elapsed: 26.775734678s
Jan 29 13:18:32.200: INFO: Pod "pod-subpath-test-downwardapi-pq54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.814723757s
STEP: Saw pod success
Jan 29 13:18:32.201: INFO: Pod "pod-subpath-test-downwardapi-pq54" satisfied condition "success or failure"
Jan 29 13:18:32.205: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-pq54 container test-container-subpath-downwardapi-pq54:
STEP: delete the pod
Jan 29 13:18:32.351: INFO: Waiting for pod pod-subpath-test-downwardapi-pq54 to disappear
Jan 29 13:18:32.365: INFO: Pod pod-subpath-test-downwardapi-pq54 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-pq54
Jan 29 13:18:32.365: INFO: Deleting pod "pod-subpath-test-downwardapi-pq54" in namespace "subpath-620"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:18:32.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-620" for this suite.
Jan 29 13:18:38.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:18:38.521: INFO: namespace subpath-620 deletion completed in 6.146331223s
• [SLOW TEST:35.299 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:18:38.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 13:18:38.639: INFO: Creating ReplicaSet my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8
Jan 29 13:18:38.672: INFO: Pod name my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8: Found 0 pods out of 1
Jan 29 13:18:43.681: INFO: Pod name my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8: Found 1 pods out of 1
Jan 29 13:18:43.682: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8" is running
Jan 29 13:18:47.694: INFO: Pod "my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8-m6z5r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:18:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:18:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:18:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:18:38 +0000 UTC Reason: Message:}])
Jan 29 13:18:47.695: INFO: Trying to dial the pod
Jan 29 13:18:52.718: INFO: Controller my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8: Got expected result from replica 1 [my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8-m6z5r]: "my-hostname-basic-1f400597-5c7c-4e1b-a5eb-26f86d1757c8-m6z5r", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:18:52.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7768" for this suite.
Jan 29 13:18:58.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:18:58.875: INFO: namespace replicaset-7768 deletion completed in 6.152550574s
• [SLOW TEST:20.354 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:18:58.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-c7271d01-eda4-476b-9a01-6d44ba9235fb
STEP: Creating a pod to test consume configMaps
Jan 29 13:18:59.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458" in namespace "configmap-5684" to be "success or failure"
Jan 29 13:18:59.139: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107208ms
Jan 29 13:19:01.148: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016620323s
Jan 29 13:19:03.242: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111387512s
Jan 29 13:19:05.251: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119953126s
Jan 29 13:19:07.258: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12679975s
Jan 29 13:19:09.266: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134782676s
STEP: Saw pod success
Jan 29 13:19:09.266: INFO: Pod "pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458" satisfied condition "success or failure"
Jan 29 13:19:09.274: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458 container configmap-volume-test:
STEP: delete the pod
Jan 29 13:19:09.410: INFO: Waiting for pod pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458 to disappear
Jan 29 13:19:09.417: INFO: Pod pod-configmaps-c572bd9b-62ef-4e17-b6cb-c86963fa8458 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:19:09.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5684" for this suite.
Jan 29 13:19:15.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:19:15.587: INFO: namespace configmap-5684 deletion completed in 6.160829882s
• [SLOW TEST:16.711 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:19:15.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-f24j
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 13:19:15.784: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f24j" in namespace "subpath-8450" to be "success or failure"
Jan 29 13:19:15.810: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Pending", Reason="", readiness=false. Elapsed: 26.377234ms
Jan 29 13:19:17.821: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037147073s
Jan 29 13:19:19.830: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046392948s
Jan 29 13:19:21.851: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06748286s
Jan 29 13:19:23.870: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 8.086015084s
Jan 29 13:19:25.906: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 10.121749519s
Jan 29 13:19:27.914: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 12.130313932s
Jan 29 13:19:29.921: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 14.136978924s
Jan 29 13:19:31.929: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 16.144904141s
Jan 29 13:19:33.948: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 18.164597777s
Jan 29 13:19:35.959: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 20.175589664s
Jan 29 13:19:37.970: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 22.18620999s
Jan 29 13:19:39.983: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 24.198743712s
Jan 29 13:19:41.993: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Running", Reason="", readiness=true. Elapsed: 26.208850828s
Jan 29 13:19:44.016: INFO: Pod "pod-subpath-test-configmap-f24j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.23175028s
STEP: Saw pod success
Jan 29 13:19:44.016: INFO: Pod "pod-subpath-test-configmap-f24j" satisfied condition "success or failure"
Jan 29 13:19:44.028: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-f24j container test-container-subpath-configmap-f24j:
STEP: delete the pod
Jan 29 13:19:44.162: INFO: Waiting for pod pod-subpath-test-configmap-f24j to disappear
Jan 29 13:19:44.169: INFO: Pod pod-subpath-test-configmap-f24j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-f24j
Jan 29 13:19:44.169: INFO: Deleting pod "pod-subpath-test-configmap-f24j" in namespace "subpath-8450"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:19:44.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8450" for this suite.
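For reference: the subPath mount shape exercised by the two Subpath specs above can be sketched by hand. This only shows the single-key subPath mount, not the atomic-writer update semantics the suite also checks; all names are illustrative:

  kubectl create configmap subpath-demo --from-literal=hello.txt='hello from a configmap'
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo                   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ["cat", "/mnt/hello.txt"]
      volumeMounts:
      - name: cm
        mountPath: /mnt/hello.txt
        subPath: hello.txt               # mounts one key instead of the whole volume
    volumes:
    - name: cm
      configMap:
        name: subpath-demo
  EOF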
Jan 29 13:19:50.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:19:50.386: INFO: namespace subpath-8450 deletion completed in 6.206613497s • [SLOW TEST:34.797 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:19:50.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jan 29 13:19:50.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7406 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 29 13:20:01.027: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0129 13:19:59.351612 855 log.go:172] (0xc000870370) (0xc000726c80) Create stream\nI0129 13:19:59.351912 855 log.go:172] (0xc000870370) (0xc000726c80) Stream added, broadcasting: 1\nI0129 13:19:59.404168 855 log.go:172] (0xc000870370) Reply frame received for 1\nI0129 13:19:59.405506 855 log.go:172] (0xc000870370) (0xc0009aa000) Create stream\nI0129 13:19:59.405728 855 log.go:172] (0xc000870370) (0xc0009aa000) Stream added, broadcasting: 3\nI0129 13:19:59.421032 855 log.go:172] (0xc000870370) Reply frame received for 3\nI0129 13:19:59.421305 855 log.go:172] (0xc000870370) (0xc0009aa0a0) Create stream\nI0129 13:19:59.421343 855 log.go:172] (0xc000870370) (0xc0009aa0a0) Stream added, broadcasting: 5\nI0129 13:19:59.426057 855 log.go:172] (0xc000870370) Reply frame received for 5\nI0129 13:19:59.426125 855 log.go:172] (0xc000870370) (0xc0005e0140) Create stream\nI0129 13:19:59.426145 855 log.go:172] (0xc000870370) (0xc0005e0140) Stream added, broadcasting: 7\nI0129 13:19:59.429312 855 log.go:172] (0xc000870370) Reply frame received for 7\nI0129 13:19:59.429898 855 log.go:172] (0xc0009aa000) (3) Writing data frame\nI0129 13:19:59.430039 855 log.go:172] (0xc0009aa000) (3) Writing data frame\nI0129 13:19:59.454356 855 log.go:172] (0xc000870370) Data frame received for 5\nI0129 13:19:59.454377 855 log.go:172] (0xc0009aa0a0) (5) Data frame handling\nI0129 13:19:59.454402 855 log.go:172] (0xc0009aa0a0) (5) Data frame sent\nI0129 13:19:59.463411 855 log.go:172] (0xc000870370) Data frame received for 5\nI0129 13:19:59.463443 855 log.go:172] (0xc0009aa0a0) (5) Data frame handling\nI0129 13:19:59.463512 855 log.go:172] (0xc0009aa0a0) (5) Data frame sent\nI0129 13:20:00.942247 855 log.go:172] (0xc000870370) (0xc0009aa0a0) Stream removed, broadcasting: 5\nI0129 13:20:00.942619 855 log.go:172] (0xc000870370) Data frame received for 1\nI0129 13:20:00.942990 855 log.go:172] (0xc000870370) (0xc0009aa000) Stream removed, broadcasting: 3\nI0129 13:20:00.943251 855 log.go:172] (0xc000726c80) (1) Data frame handling\nI0129 13:20:00.943321 855 log.go:172] (0xc000870370) (0xc0005e0140) Stream removed, broadcasting: 7\nI0129 13:20:00.943612 855 log.go:172] (0xc000726c80) (1) Data frame sent\nI0129 13:20:00.943691 855 log.go:172] (0xc000870370) (0xc000726c80) Stream removed, broadcasting: 1\nI0129 13:20:00.943732 855 log.go:172] (0xc000870370) Go away received\nI0129 13:20:00.944915 855 log.go:172] (0xc000870370) (0xc000726c80) Stream removed, broadcasting: 1\nI0129 13:20:00.945080 855 log.go:172] (0xc000870370) (0xc0009aa000) Stream removed, broadcasting: 3\nI0129 13:20:00.945174 855 log.go:172] (0xc000870370) (0xc0009aa0a0) Stream removed, broadcasting: 5\nI0129 13:20:00.945240 855 log.go:172] (0xc000870370) (0xc0005e0140) Stream removed, broadcasting: 7\n" Jan 29 13:20:01.028: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:20:03.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7406" for this suite. 
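For reference: the deprecation warning above points at the replacements for `--generator=job/v1`. A rough, hand-run equivalent under those suggestions; names are illustrative and this is a sketch, not the suite's invocation:

  # run-pod/v1 path: a bare pod, attached to stdin and auto-deleted on exit
  echo 'abcd1234' | kubectl run e2e-demo --image=busybox:1.29 --rm --restart=Never \
      --stdin --attach -- sh -c 'cat && echo stdin closed'
  # or create the Job explicitly; it then has to be deleted by hand
  kubectl create job e2e-demo-job --image=busybox:1.29 -- sh -c 'echo stdin closed'
  kubectl delete job e2e-demo-job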
Jan 29 13:20:09.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:20:09.188: INFO: namespace kubectl-7406 deletion completed in 6.147834414s
• [SLOW TEST:18.802 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:20:09.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3a5a4321-7d1c-489c-90dc-fa0df751bc78
STEP: Creating a pod to test consume secrets
Jan 29 13:20:09.413: INFO: Waiting up to 5m0s for pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379" in namespace "secrets-3488" to be "success or failure"
Jan 29 13:20:09.506: INFO: Pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379": Phase="Pending", Reason="", readiness=false. Elapsed: 93.028022ms
Jan 29 13:20:11.514: INFO: Pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100744621s
Jan 29 13:20:13.525: INFO: Pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111351395s
Jan 29 13:20:15.535: INFO: Pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121300477s
Jan 29 13:20:17.600: INFO: Pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.186403378s
STEP: Saw pod success
Jan 29 13:20:17.600: INFO: Pod "pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379" satisfied condition "success or failure"
Jan 29 13:20:17.607: INFO: Trying to get logs from node iruya-node pod pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379 container secret-volume-test:
STEP: delete the pod
Jan 29 13:20:17.739: INFO: Waiting for pod pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379 to disappear
Jan 29 13:20:17.749: INFO: Pod pod-secrets-6eb5e737-ca6c-44a5-90ca-9cccf5c1c379 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:20:17.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3488" for this suite.
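For reference: the same-name-different-namespace scenario above can be reproduced by hand. A minimal sketch; namespace, secret, and pod names are illustrative, not the suite's generated fixtures:

  kubectl create namespace ns-a
  kubectl create namespace ns-b
  kubectl create secret generic secret-test --from-literal=data=value-a -n ns-a
  kubectl create secret generic secret-test --from-literal=data=value-b -n ns-b   # same name, other namespace
  cat <<'EOF' | kubectl apply -n ns-a -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["cat", "/etc/secret-volume/data"]   # should print value-a, never value-b
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test        # resolved only within the pod's own namespace
  EOF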
Jan 29 13:20:23.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:20:23.907: INFO: namespace secrets-3488 deletion completed in 6.153017784s
STEP: Destroying namespace "secret-namespace-2969" for this suite.
Jan 29 13:20:29.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:20:30.040: INFO: namespace secret-namespace-2969 deletion completed in 6.132409297s
• [SLOW TEST:20.851 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:20:30.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:20:30.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385" in namespace "projected-3311" to be "success or failure"
Jan 29 13:20:30.149: INFO: Pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385": Phase="Pending", Reason="", readiness=false. Elapsed: 27.18852ms
Jan 29 13:20:32.156: INFO: Pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033888807s
Jan 29 13:20:34.165: INFO: Pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043511036s
Jan 29 13:20:36.243: INFO: Pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120903785s
Jan 29 13:20:38.252: INFO: Pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130390437s
STEP: Saw pod success
Jan 29 13:20:38.252: INFO: Pod "downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385" satisfied condition "success or failure"
Jan 29 13:20:38.256: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385 container client-container:
STEP: delete the pod
Jan 29 13:20:38.400: INFO: Waiting for pod downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385 to disappear
Jan 29 13:20:38.407: INFO: Pod downwardapi-volume-de99c0f0-4cf3-40f2-b55a-5862cc396385 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:20:38.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3311" for this suite.
Jan 29 13:20:44.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:20:44.559: INFO: namespace projected-3311 deletion completed in 6.143968691s
• [SLOW TEST:14.519 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:20:44.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:20:44.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92" in namespace "downward-api-9579" to be "success or failure"
Jan 29 13:20:44.686: INFO: Pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.836335ms
Jan 29 13:20:46.700: INFO: Pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020656746s
Jan 29 13:20:48.708: INFO: Pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029380756s
Jan 29 13:20:50.728: INFO: Pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048830977s
Jan 29 13:20:52.735: INFO: Pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056208172s
STEP: Saw pod success
Jan 29 13:20:52.735: INFO: Pod "downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92" satisfied condition "success or failure"
Jan 29 13:20:52.739: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92 container client-container:
STEP: delete the pod
Jan 29 13:20:52.829: INFO: Waiting for pod downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92 to disappear
Jan 29 13:20:52.834: INFO: Pod downwardapi-volume-2ddf4222-3d9d-49f8-b3b5-19272e6d5e92 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:20:52.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9579" for this suite.
Jan 29 13:20:58.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:20:59.053: INFO: namespace downward-api-9579 deletion completed in 6.211832194s
• [SLOW TEST:14.494 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:20:59.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5911
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5911
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5911
Jan 29 13:20:59.156: INFO: Found 0 stateful pods, waiting for 1
Jan 29 13:21:09.167: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 29 13:21:09.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:21:09.854: INFO: stderr: "I0129 13:21:09.427180 890 log.go:172] (0xc0008ac370) (0xc0008a86e0) Create stream\nI0129 13:21:09.427659 890 log.go:172] (0xc0008ac370) (0xc0008a86e0) Stream added, broadcasting: 1\nI0129 13:21:09.440052 890 log.go:172] (0xc0008ac370) Reply frame received for 1\nI0129 13:21:09.440114 890 log.go:172] (0xc0008ac370) (0xc0008a8780) Create stream\nI0129 13:21:09.440122 890 log.go:172] (0xc0008ac370) (0xc0008a8780) Stream added, broadcasting: 3\nI0129 13:21:09.443507 890 log.go:172] (0xc0008ac370) Reply frame received for 3\nI0129 13:21:09.443688 890 log.go:172] (0xc0008ac370) (0xc00062e280) Create stream\nI0129 13:21:09.443703 890 log.go:172] (0xc0008ac370) (0xc00062e280) Stream added, broadcasting: 5\nI0129 13:21:09.448293 890 log.go:172] (0xc0008ac370) Reply frame received for 5\nI0129 13:21:09.624191 890 log.go:172] (0xc0008ac370) Data frame received for 5\nI0129 13:21:09.624253 890 log.go:172] (0xc00062e280) (5) Data frame handling\nI0129 13:21:09.624285 890 log.go:172] (0xc00062e280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:21:09.703215 890 log.go:172] (0xc0008ac370) Data frame received for 3\nI0129 13:21:09.703291 890 log.go:172] (0xc0008a8780) (3) Data frame handling\nI0129 13:21:09.703316 890 log.go:172] (0xc0008a8780) (3) Data frame sent\nI0129 13:21:09.838657 890 log.go:172] (0xc0008ac370) Data frame received for 1\nI0129 13:21:09.838901 890 log.go:172] (0xc0008ac370) (0xc0008a8780) Stream removed, broadcasting: 3\nI0129 13:21:09.839143 890 log.go:172] (0xc0008a86e0) (1) Data frame handling\nI0129 13:21:09.839196 890 log.go:172] (0xc0008a86e0) (1) Data frame sent\nI0129 13:21:09.839301 890 log.go:172] (0xc0008ac370) (0xc00062e280) Stream removed, broadcasting: 5\nI0129 13:21:09.839365 890 log.go:172] (0xc0008ac370) (0xc0008a86e0) Stream removed, broadcasting: 1\nI0129 13:21:09.839389 890 log.go:172] (0xc0008ac370) Go away received\nI0129 13:21:09.841203 890 log.go:172] (0xc0008ac370) (0xc0008a86e0) Stream removed, broadcasting: 1\nI0129 13:21:09.841228 890 log.go:172] (0xc0008ac370) (0xc0008a8780) Stream removed, broadcasting: 3\nI0129 13:21:09.841242 890 log.go:172] (0xc0008ac370) (0xc00062e280) Stream removed, broadcasting: 5\n"
Jan 29 13:21:09.854: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:21:09.854: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 29 13:21:09.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 29 13:21:19.878: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:21:19.878: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 13:21:19.908: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999328s
Jan 29 13:21:20.925: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990493519s
Jan 29 13:21:21.936: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.973299955s
Jan 29 13:21:22.951: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.962385984s
Jan 29 13:21:23.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.94750327s
Jan 29 13:21:25.009: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.930511105s
Jan 29 13:21:26.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.889445979s
Jan 29 13:21:27.041: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.879551237s
Jan 29 13:21:28.051: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.858350396s
Jan 29 13:21:29.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 848.312698ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5911
Jan 29 13:21:30.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:21:30.796: INFO: stderr: "I0129 13:21:30.347221 910 log.go:172] (0xc000116dc0) (0xc0002b86e0) Create stream\nI0129 13:21:30.347593 910 log.go:172] (0xc000116dc0) (0xc0002b86e0) Stream added, broadcasting: 1\nI0129 13:21:30.358782 910 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0129 13:21:30.359421 910 log.go:172] (0xc000116dc0) (0xc0007e0000) Create stream\nI0129 13:21:30.359615 910 log.go:172] (0xc000116dc0) (0xc0007e0000) Stream added, broadcasting: 3\nI0129 13:21:30.369557 910 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0129 13:21:30.369707 910 log.go:172] (0xc000116dc0) (0xc0002b8000) Create stream\nI0129 13:21:30.369730 910 log.go:172] (0xc000116dc0) (0xc0002b8000) Stream added, broadcasting: 5\nI0129 13:21:30.374129 910 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0129 13:21:30.528831 910 log.go:172] (0xc000116dc0) Data frame received for 5\nI0129 13:21:30.529756 910 log.go:172] (0xc0002b8000) (5) Data frame handling\nI0129 13:21:30.530178 910 log.go:172] (0xc000116dc0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0129 13:21:30.530741 910 log.go:172] (0xc0007e0000) (3) Data frame handling\nI0129 13:21:30.530836 910 log.go:172] (0xc0002b8000) (5) Data frame sent\nI0129 13:21:30.530862 910 log.go:172] (0xc0007e0000) (3) Data frame sent\nI0129 13:21:30.774227 910 log.go:172] (0xc000116dc0) (0xc0007e0000) Stream removed, broadcasting: 3\nI0129 13:21:30.774492 910 log.go:172] (0xc000116dc0) Data frame received for 1\nI0129 13:21:30.774542 910 log.go:172] (0xc0002b86e0) (1) Data frame handling\nI0129 13:21:30.774604 910 log.go:172] (0xc0002b86e0) (1) Data frame sent\nI0129 13:21:30.774640 910 log.go:172] (0xc000116dc0) (0xc0002b86e0) Stream removed, broadcasting: 1\nI0129 13:21:30.774675 910 log.go:172] (0xc000116dc0) (0xc0002b8000) Stream removed, broadcasting: 5\nI0129 13:21:30.774715 910 log.go:172] (0xc000116dc0) Go away received\nI0129 13:21:30.776081 910 log.go:172] (0xc000116dc0) (0xc0002b86e0) Stream removed, broadcasting: 1\nI0129 13:21:30.776106 910 log.go:172] (0xc000116dc0) (0xc0007e0000) Stream removed, broadcasting: 3\nI0129 13:21:30.776115 910 log.go:172] (0xc000116dc0) (0xc0002b8000) Stream removed, broadcasting: 5\n"
Jan 29 13:21:30.796: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:21:30.796: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 29 13:21:30.809: INFO: Found 1 stateful pods, waiting for 3
Jan 29 13:21:40.816: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:21:40.816: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:21:40.816: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 29 13:21:50.827: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:21:50.827: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:21:50.827: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 29 13:21:50.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:21:51.403: INFO: stderr: "I0129 13:21:51.081632 931 log.go:172] (0xc000900420) (0xc000a36640) Create stream\nI0129 13:21:51.081859 931 log.go:172] (0xc000900420) (0xc000a36640) Stream added, broadcasting: 1\nI0129 13:21:51.089499 931 log.go:172] (0xc000900420) Reply frame received for 1\nI0129 13:21:51.089567 931 log.go:172] (0xc000900420) (0xc0004d61e0) Create stream\nI0129 13:21:51.089578 931 log.go:172] (0xc000900420) (0xc0004d61e0) Stream added, broadcasting: 3\nI0129 13:21:51.091048 931 log.go:172] (0xc000900420) Reply frame received for 3\nI0129 13:21:51.091085 931 log.go:172] (0xc000900420) (0xc000a40000) Create stream\nI0129 13:21:51.091100 931 log.go:172] (0xc000900420) (0xc000a40000) Stream added, broadcasting: 5\nI0129 13:21:51.092361 931 log.go:172] (0xc000900420) Reply frame received for 5\nI0129 13:21:51.218698 931 log.go:172] (0xc000900420) Data frame received for 5\nI0129 13:21:51.218971 931 log.go:172] (0xc000a40000) (5) Data frame handling\nI0129 13:21:51.219013 931 log.go:172] (0xc000a40000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:21:51.219076 931 log.go:172] (0xc000900420) Data frame received for 3\nI0129 13:21:51.219101 931 log.go:172] (0xc0004d61e0) (3) Data frame handling\nI0129 13:21:51.219132 931 log.go:172] (0xc0004d61e0) (3) Data frame sent\nI0129 13:21:51.383696 931 log.go:172] (0xc000900420) Data frame received for 1\nI0129 13:21:51.383965 931 log.go:172] (0xc000900420) (0xc0004d61e0) Stream removed, broadcasting: 3\nI0129 13:21:51.384187 931 log.go:172] (0xc000a36640) (1) Data frame handling\nI0129 13:21:51.384223 931 log.go:172] (0xc000a36640) (1) Data frame sent\nI0129 13:21:51.384408 931 log.go:172] (0xc000900420) (0xc000a40000) Stream removed, broadcasting: 5\nI0129 13:21:51.384541 931 log.go:172] (0xc000900420) (0xc000a36640) Stream removed, broadcasting: 1\nI0129 13:21:51.384649 931 log.go:172] (0xc000900420) Go away received\nI0129 13:21:51.385894 931 log.go:172] (0xc000900420) (0xc000a36640) Stream removed, broadcasting: 1\nI0129 13:21:51.385922 931 log.go:172] (0xc000900420) (0xc0004d61e0) Stream removed, broadcasting: 3\nI0129 13:21:51.385932 931 log.go:172] (0xc000900420) (0xc000a40000) Stream removed, broadcasting: 5\n"
Jan 29 13:21:51.404: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:21:51.404: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 29 13:21:51.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:21:52.044: INFO: stderr: "I0129 13:21:51.675283 950 log.go:172] (0xc00013adc0) (0xc0003ec820) Create stream\nI0129 13:21:51.675624 950 log.go:172] (0xc00013adc0) (0xc0003ec820) Stream added, broadcasting: 1\nI0129 13:21:51.678414 950 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0129 13:21:51.678445 950 log.go:172] (0xc00013adc0) (0xc0003ec8c0) Create stream\nI0129 13:21:51.678450 950 log.go:172] (0xc00013adc0) (0xc0003ec8c0) Stream added, broadcasting: 3\nI0129 13:21:51.681883 950 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0129 13:21:51.682297 950 log.go:172] (0xc00013adc0) (0xc000a7e000) Create stream\nI0129 13:21:51.682350 950 log.go:172] (0xc00013adc0) (0xc000a7e000) Stream added, broadcasting: 5\nI0129 13:21:51.690504 950 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0129 13:21:51.826635 950 log.go:172] (0xc00013adc0) Data frame received for 5\nI0129 13:21:51.826755 950 log.go:172] (0xc000a7e000) (5) Data frame handling\nI0129 13:21:51.826775 950 log.go:172] (0xc000a7e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:21:51.916948 950 log.go:172] (0xc00013adc0) Data frame received for 3\nI0129 13:21:51.917135 950 log.go:172] (0xc0003ec8c0) (3) Data frame handling\nI0129 13:21:51.917170 950 log.go:172] (0xc0003ec8c0) (3) Data frame sent\nI0129 13:21:52.031752 950 log.go:172] (0xc00013adc0) Data frame received for 1\nI0129 13:21:52.031901 950 log.go:172] (0xc00013adc0) (0xc0003ec8c0) Stream removed, broadcasting: 3\nI0129 13:21:52.031973 950 log.go:172] (0xc0003ec820) (1) Data frame handling\nI0129 13:21:52.031999 950 log.go:172] (0xc0003ec820) (1) Data frame sent\nI0129 13:21:52.032012 950 log.go:172] (0xc00013adc0) (0xc0003ec820) Stream removed, broadcasting: 1\nI0129 13:21:52.032827 950 log.go:172] (0xc00013adc0) (0xc000a7e000) Stream removed, broadcasting: 5\nI0129 13:21:52.032881 950 log.go:172] (0xc00013adc0) Go away received\nI0129 13:21:52.032927 950 log.go:172] (0xc00013adc0) (0xc0003ec820) Stream removed, broadcasting: 1\nI0129 13:21:52.032946 950 log.go:172] (0xc00013adc0) (0xc0003ec8c0) Stream removed, broadcasting: 3\nI0129 13:21:52.032956 950 log.go:172] (0xc00013adc0) (0xc000a7e000) Stream removed, broadcasting: 5\n"
Jan 29 13:21:52.045: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:21:52.045: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 29 13:21:52.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:21:52.648: INFO: stderr: "I0129 13:21:52.322952 971 log.go:172] (0xc000116e70) (0xc000632960) Create stream\nI0129 13:21:52.323470 971 log.go:172] (0xc000116e70) (0xc000632960) Stream added, broadcasting: 1\nI0129 13:21:52.329032 971 log.go:172] (0xc000116e70) Reply frame received for 1\nI0129 13:21:52.329102 971 log.go:172] (0xc000116e70) (0xc00071a000) Create stream\nI0129 13:21:52.329119 971 log.go:172] (0xc000116e70) (0xc00071a000) Stream added, broadcasting: 3\nI0129 13:21:52.330772 971 log.go:172] (0xc000116e70) Reply frame received for 3\nI0129 13:21:52.330877 971 log.go:172] (0xc000116e70) (0xc000752000) Create stream\nI0129 13:21:52.330894 971 log.go:172] (0xc000116e70) (0xc000752000) Stream added, broadcasting: 5\nI0129 13:21:52.331821 971 log.go:172] (0xc000116e70) Reply frame received for 5\nI0129 13:21:52.429487 971 log.go:172] (0xc000116e70) Data frame received for 5\nI0129 13:21:52.429642 971 log.go:172] (0xc000752000) (5) Data frame handling\nI0129 13:21:52.429678 971 log.go:172] (0xc000752000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:21:52.458543 971 log.go:172] (0xc000116e70) Data frame received for 3\nI0129 13:21:52.458675 971 log.go:172] (0xc00071a000) (3) Data frame handling\nI0129 13:21:52.458712 971 log.go:172] (0xc00071a000) (3) Data frame sent\nI0129 13:21:52.631577 971 log.go:172] (0xc000116e70) Data frame received for 1\nI0129 13:21:52.631730 971 log.go:172] (0xc000116e70) (0xc00071a000) Stream removed, broadcasting: 3\nI0129 13:21:52.631857 971 log.go:172] (0xc000632960) (1) Data frame handling\nI0129 13:21:52.631884 971 log.go:172] (0xc000116e70) (0xc000752000) Stream removed, broadcasting: 5\nI0129 13:21:52.631935 971 log.go:172] (0xc000632960) (1) Data frame sent\nI0129 13:21:52.631968 971 log.go:172] (0xc000116e70) (0xc000632960) Stream removed, broadcasting: 1\nI0129 13:21:52.631994 971 log.go:172] (0xc000116e70) Go away received\nI0129 13:21:52.633428 971 log.go:172] (0xc000116e70) (0xc000632960) Stream removed, broadcasting: 1\nI0129 13:21:52.633439 971 log.go:172] (0xc000116e70) (0xc00071a000) Stream removed, broadcasting: 3\nI0129 13:21:52.633446 971 log.go:172] (0xc000116e70) (0xc000752000) Stream removed, broadcasting: 5\n"
Jan 29 13:21:52.649: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:21:52.649: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jan 29 13:21:52.649: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 13:21:52.657: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 29 13:22:02.679: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:22:02.679: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:22:02.679: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:22:02.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999751s
Jan 29 13:22:03.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985596756s
Jan 29 13:22:04.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972253448s
Jan 29 13:22:05.737: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962304937s
Jan 29 13:22:06.772: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.951433068s
Jan 29 13:22:07.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.915569257s
Jan 29 13:22:08.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.855354965s
Jan 29 13:22:09.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.806523535s
Jan 29 13:22:10.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.794706016s
Jan 29 13:22:11.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 784.8624ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5911
Jan 29 13:22:12.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:13.588: INFO: stderr: "I0129 13:22:13.207840 991 log.go:172] (0xc0009ba420) (0xc00084a640) Create stream\nI0129 13:22:13.208036 991 log.go:172] (0xc0009ba420) (0xc00084a640) Stream added, broadcasting: 1\nI0129 13:22:13.223540 991 log.go:172] (0xc0009ba420) Reply frame received for 1\nI0129 13:22:13.223648 991 log.go:172] (0xc0009ba420) (0xc000586320) Create stream\nI0129 13:22:13.223670 991 log.go:172] (0xc0009ba420) (0xc000586320) Stream added, broadcasting: 3\nI0129 13:22:13.227422 991 log.go:172] (0xc0009ba420) Reply frame received for 3\nI0129 13:22:13.227457 991 log.go:172] (0xc0009ba420) (0xc0005863c0) Create stream\nI0129 13:22:13.227474 991 log.go:172] (0xc0009ba420) (0xc0005863c0) Stream added, broadcasting: 5\nI0129 13:22:13.229113 991 log.go:172] (0xc0009ba420) Reply frame received for 5\nI0129 13:22:13.417938 991 log.go:172] (0xc0009ba420) Data frame received for 5\nI0129 13:22:13.418042 991 log.go:172] (0xc0005863c0) (5) Data frame handling\nI0129 13:22:13.418071 991 log.go:172] (0xc0005863c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0129 13:22:13.418120 991 log.go:172] (0xc0009ba420) Data frame received for 3\nI0129 13:22:13.418129 991 log.go:172] (0xc000586320) (3) Data frame handling\nI0129 13:22:13.418144 991 log.go:172] (0xc000586320) (3) Data frame sent\nI0129 13:22:13.570139 991 log.go:172] (0xc0009ba420) Data frame received for 1\nI0129 13:22:13.570316 991 log.go:172] (0xc0009ba420) (0xc0005863c0) Stream removed, broadcasting: 5\nI0129 13:22:13.570390 991 log.go:172] (0xc00084a640) (1) Data frame handling\nI0129 13:22:13.570432 991 log.go:172] (0xc0009ba420) (0xc000586320) Stream removed, broadcasting: 3\nI0129 13:22:13.570486 991 log.go:172] (0xc00084a640) (1) Data frame sent\nI0129 13:22:13.570522 991 log.go:172] (0xc0009ba420) (0xc00084a640) Stream removed, broadcasting: 1\nI0129 13:22:13.570566 991 log.go:172] (0xc0009ba420) Go away received\nI0129 13:22:13.572049 991 log.go:172] (0xc0009ba420) (0xc00084a640) Stream removed, broadcasting: 1\nI0129 13:22:13.572067 991 log.go:172] (0xc0009ba420) (0xc000586320) Stream removed, broadcasting: 3\nI0129 13:22:13.572078 991 log.go:172] (0xc0009ba420) (0xc0005863c0) Stream removed, broadcasting: 5\n"
Jan 29 13:22:13.588: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:22:13.588: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 29 13:22:13.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:14.185: INFO: stderr: "I0129 13:22:13.884989 1011 log.go:172] (0xc000920160) (0xc0005d88c0) Create stream\nI0129 13:22:13.885449 1011 log.go:172] (0xc000920160) (0xc0005d88c0) Stream added, broadcasting: 1\nI0129 13:22:13.890856 1011 log.go:172] (0xc000920160) Reply frame received for 1\nI0129 13:22:13.891071 1011 log.go:172] (0xc000920160) (0xc00084c000) Create stream\nI0129 13:22:13.891105 1011 log.go:172] (0xc000920160) (0xc00084c000) Stream added, broadcasting: 3\nI0129 13:22:13.892514 1011 log.go:172] (0xc000920160) Reply frame received for 3\nI0129 13:22:13.892543 1011 log.go:172] (0xc000920160) (0xc0005d8960) Create stream\nI0129 13:22:13.892557 1011 log.go:172] (0xc000920160) (0xc0005d8960) Stream added, broadcasting: 5\nI0129 13:22:13.893875 1011 log.go:172] (0xc000920160) Reply frame received for 5\nI0129 13:22:14.000322 1011 log.go:172] (0xc000920160) Data frame received for 3\nI0129 13:22:14.000816 1011 log.go:172] (0xc00084c000) (3) Data frame handling\nI0129 13:22:14.000941 1011 log.go:172] (0xc00084c000) (3) Data frame sent\nI0129 13:22:14.001587 1011 log.go:172] (0xc000920160) Data frame received for 5\nI0129 13:22:14.002002 1011 log.go:172] (0xc0005d8960) (5) Data frame handling\nI0129 13:22:14.002148 1011 log.go:172] (0xc0005d8960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0129 13:22:14.150233 1011 log.go:172] (0xc000920160) Data frame received for 1\nI0129 13:22:14.150710 1011 log.go:172] (0xc0005d88c0) (1) Data frame handling\nI0129 13:22:14.150851 1011 log.go:172] (0xc0005d88c0) (1) Data frame sent\nI0129 13:22:14.156651 1011 log.go:172] (0xc000920160) (0xc0005d88c0) Stream removed, broadcasting: 1\nI0129 13:22:14.156938 1011 log.go:172] (0xc000920160) (0xc00084c000) Stream removed, broadcasting: 3\nI0129 13:22:14.157340 1011 log.go:172] (0xc000920160) (0xc0005d8960) Stream removed, broadcasting: 5\nI0129 13:22:14.157902 1011 log.go:172] (0xc000920160) Go away received\nI0129 13:22:14.159670 1011 log.go:172] (0xc000920160) (0xc0005d88c0) Stream removed, broadcasting: 1\nI0129 13:22:14.159752 1011 log.go:172] (0xc000920160) (0xc00084c000) Stream removed, broadcasting: 3\nI0129 13:22:14.159765 1011 log.go:172] (0xc000920160) (0xc0005d8960) Stream removed, broadcasting: 5\n"
Jan 29 13:22:14.185: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:22:14.185: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 29 13:22:14.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:14.715: INFO: rc: 126
Jan 29 13:22:14.716: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/1694/ns/ipc\" caused \"lstat /proc/1694/ns/ipc: no such file or directory\"": unknown
I0129 13:22:14.403340 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Create stream
I0129 13:22:14.404319 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Stream added, broadcasting: 1
I0129 13:22:14.421867 1031 log.go:172] (0xc00082e6e0) Reply frame received for 1
I0129 13:22:14.422750 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Create stream
I0129 13:22:14.422987 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Stream added, broadcasting: 3
I0129 13:22:14.428392 1031 log.go:172] (0xc00082e6e0) Reply frame received for 3
I0129 13:22:14.428460 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Create stream
I0129 13:22:14.428469 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Stream added, broadcasting: 5
I0129 13:22:14.435549 1031 log.go:172] (0xc00082e6e0) Reply frame received for 5
I0129 13:22:14.700797 1031 log.go:172] (0xc00082e6e0) Data frame received for 3
I0129 13:22:14.700946 1031 log.go:172] (0xc000990000) (3) Data frame handling
I0129 13:22:14.701002 1031 log.go:172] (0xc000990000) (3) Data frame sent
I0129 13:22:14.703318 1031 log.go:172] (0xc00082e6e0) Data frame received for 1
I0129 13:22:14.703356 1031 log.go:172] (0xc000990be0) (1) Data frame handling
I0129 13:22:14.703381 1031 log.go:172] (0xc000990be0) (1) Data frame sent
I0129 13:22:14.705129 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Stream removed, broadcasting: 5
I0129 13:22:14.705300 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Stream removed, broadcasting: 3
I0129 13:22:14.705334 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Stream removed, broadcasting: 1
I0129 13:22:14.705355 1031 log.go:172] (0xc00082e6e0) Go away received
I0129 13:22:14.706655 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Stream removed, broadcasting: 1
I0129 13:22:14.706695 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Stream removed, broadcasting: 3
I0129 13:22:14.706702 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Stream removed, broadcasting: 5
command terminated with exit code 126
 [] 0xc00195a270 exit status 126 true [0xc000ba8d20 0xc000ba8dd8 0xc000ba8ee0] [0xc000ba8d20 0xc000ba8dd8 0xc000ba8ee0] [0xc000ba8dc8 0xc000ba8e70] [0xba6c50 0xba6c50] 0xc00258ade0 }:
Command stdout:
OCI runtime exec failed: exec failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/1694/ns/ipc\" caused \"lstat /proc/1694/ns/ipc: no such file or directory\"": unknown

stderr:
I0129 13:22:14.403340 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Create stream
I0129 13:22:14.404319 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Stream added, broadcasting: 1
I0129 13:22:14.421867 1031 log.go:172] (0xc00082e6e0) Reply frame received for 1
I0129 13:22:14.422750 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Create stream
I0129 13:22:14.422987 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Stream added, broadcasting: 3
I0129 13:22:14.428392 1031 log.go:172] (0xc00082e6e0) Reply frame received for 3
I0129 13:22:14.428460 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Create stream
I0129 13:22:14.428469 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Stream added, broadcasting: 5
I0129 13:22:14.435549 1031 log.go:172] (0xc00082e6e0) Reply frame received for 5
I0129 13:22:14.700797 1031 log.go:172] (0xc00082e6e0) Data frame received for 3
I0129 13:22:14.700946 1031 log.go:172] (0xc000990000) (3) Data frame handling
I0129 13:22:14.701002 1031 log.go:172] (0xc000990000) (3) Data frame sent
I0129 13:22:14.703318 1031 log.go:172] (0xc00082e6e0) Data frame received for 1
I0129 13:22:14.703356 1031 log.go:172] (0xc000990be0) (1) Data frame handling
I0129 13:22:14.703381 1031 log.go:172] (0xc000990be0) (1) Data frame sent
I0129 13:22:14.705129 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Stream removed, broadcasting: 5
I0129 13:22:14.705300 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Stream removed, broadcasting: 3
I0129 13:22:14.705334 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Stream removed, broadcasting: 1
I0129 13:22:14.705355 1031 log.go:172] (0xc00082e6e0) Go away received
I0129 13:22:14.706655 1031 log.go:172] (0xc00082e6e0) (0xc000990be0) Stream removed, broadcasting: 1
I0129 13:22:14.706695 1031 log.go:172] (0xc00082e6e0) (0xc000990000) Stream removed, broadcasting: 3
I0129 13:22:14.706702 1031 log.go:172] (0xc00082e6e0) (0xc0005c8280) Stream removed, broadcasting: 5
command terminated with exit code 126

error: exit status 126
Jan 29 13:22:24.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:25.001: INFO: rc: 1
Jan 29 13:22:25.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found
 [] 0xc002b44f00 exit status 1 true [0xc000640db8 0xc000640df0 0xc000641080] [0xc000640db8 0xc000640df0 0xc000641080] [0xc000640de0 0xc000640ee0] [0xba6c50 0xba6c50] 0xc0028af020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error: exit status 1
Jan 29 13:22:35.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:35.154: INFO: rc: 1
Jan 29 13:22:35.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found
 [] 0xc00284d8c0 exit status 1 true [0xc000a54470 0xc000a544d8 0xc000a544f0] [0xc000a54470 0xc000a544d8 0xc000a544f0] [0xc000a544b8 0xc000a544e8] [0xba6c50 0xba6c50] 0xc002d561e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error: exit status 1
Jan 29 13:22:45.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:45.368: INFO: rc: 1
Jan 29 13:22:45.369: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found
 [] 0xc00284d9b0 exit status 1 true [0xc000a54500 0xc000a54530 0xc000a54588] [0xc000a54500 0xc000a54530 0xc000a54588] [0xc000a54520 0xc000a54568] [0xba6c50 0xba6c50] 0xc002d56660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error: exit status 1
Jan 29 13:22:55.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:22:55.566: INFO: rc: 1
Jan 29 13:22:55.567: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found
 [] 0xc00195a390 exit status 1 true [0xc000ba8f48 0xc000ba9038 0xc000ba91a0] [0xc000ba8f48 0xc000ba9038 0xc000ba91a0] [0xc000ba9010 0xc000ba90f0] [0xba6c50 0xba6c50] 0xc00258b3e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error: exit status 1
Jan 29 13:23:05.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:23:05.743: INFO: rc: 1
Jan 29 13:23:05.743: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found
 [] 0xc00284da70 exit status 1 true [0xc000a545a8 0xc000a545e0 0xc000a54610] [0xc000a545a8 0xc000a545e0 0xc000a54610] [0xc000a545d0 0xc000a54600] [0xba6c50 0xba6c50] 0xc002d56b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error: exit status 1
Jan 29 13:23:15.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:23:16.008: INFO: rc: 1
Jan 29 13:23:16.008: INFO: Waiting 10s to retry failed
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00195a4b0 exit status 1 true [0xc000ba9250 0xc000ba92c0 0xc000ba9440] [0xc000ba9250 0xc000ba92c0 0xc000ba9440] [0xc000ba9298 0xc000ba93e8] [0xba6c50 0xba6c50] 0xc00258b7a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:23:26.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:23:26.199: INFO: rc: 1 Jan 29 13:23:26.199: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002b44ff0 exit status 1 true [0xc0006410e0 0xc000641260 0xc000641310] [0xc0006410e0 0xc000641260 0xc000641310] [0xc0006411a0 0xc0006412f8] [0xba6c50 0xba6c50] 0xc0028af320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:23:36.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:23:36.372: INFO: rc: 1 Jan 29 13:23:36.373: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf00c0 exit status 1 true [0xc000186000 0xc0006c7038 0xc0006c7350] [0xc000186000 0xc0006c7038 0xc0006c7350] [0xc0006c6eb8 0xc0006c7260] [0xba6c50 0xba6c50] 0xc002e46240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:23:46.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:23:46.605: INFO: rc: 1 Jan 29 13:23:46.605: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf0180 exit status 1 true [0xc0006c7418 0xc0006c7760 0xc0006c77e0] [0xc0006c7418 0xc0006c7760 0xc0006c77e0] [0xc0006c7658 0xc0006c77b0] [0xba6c50 0xba6c50] 0xc002e46540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:23:56.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:23:56.781: INFO: rc: 1 Jan 29 13:23:56.781: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found 
[] 0xc0020fa0c0 exit status 1 true [0xc001ef0020 0xc001ef0058 0xc001ef0088] [0xc001ef0020 0xc001ef0058 0xc001ef0088] [0xc001ef0030 0xc001ef0070] [0xba6c50 0xba6c50] 0xc0026f0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:24:06.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:24:07.014: INFO: rc: 1 Jan 29 13:24:07.014: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff80c0 exit status 1 true [0xc000ba8020 0xc000ba82b8 0xc000ba83d0] [0xc000ba8020 0xc000ba82b8 0xc000ba83d0] [0xc000ba80b8 0xc000ba8368] [0xba6c50 0xba6c50] 0xc002a144e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:24:17.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:24:17.212: INFO: rc: 1 Jan 29 13:24:17.212: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff81e0 exit status 1 true [0xc000ba83f8 0xc000ba8480 0xc000ba8518] [0xc000ba83f8 0xc000ba8480 0xc000ba8518] [0xc000ba8440 0xc000ba84f8] [0xba6c50 0xba6c50] 0xc002a147e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:24:27.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:24:27.442: INFO: rc: 1 Jan 29 13:24:27.442: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf0240 exit status 1 true [0xc0006c77f8 0xc0006c7b68 0xc0006c7dd0] [0xc0006c77f8 0xc0006c7b68 0xc0006c7dd0] [0xc0006c7a88 0xc0006c7d50] [0xba6c50 0xba6c50] 0xc002e46840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:24:37.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:24:37.623: INFO: rc: 1 Jan 29 13:24:37.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf0330 exit status 1 true [0xc0006c7ea0 0xc000a541d0 0xc000a54268] [0xc0006c7ea0 0xc000a541d0 0xc000a54268] [0xc000a54060 0xc000a54220] [0xba6c50 0xba6c50] 0xc002e46d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found 
error: exit status 1 Jan 29 13:24:47.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:24:47.823: INFO: rc: 1 Jan 29 13:24:47.824: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf03f0 exit status 1 true [0xc000a542b8 0xc000a542f0 0xc000a54308] [0xc000a542b8 0xc000a542f0 0xc000a54308] [0xc000a542e0 0xc000a54300] [0xba6c50 0xba6c50] 0xc002e47740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:24:57.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:24:58.126: INFO: rc: 1 Jan 29 13:24:58.126: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020fa210 exit status 1 true [0xc001ef0090 0xc001ef00c8 0xc001ef00e8] [0xc001ef0090 0xc001ef00c8 0xc001ef00e8] [0xc001ef00b0 0xc001ef00d8] [0xba6c50 0xba6c50] 0xc0026f1740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:25:08.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:25:08.336: INFO: rc: 1 Jan 29 13:25:08.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020fa2d0 exit status 1 true [0xc001ef0108 0xc001ef0130 0xc001ef0168] [0xc001ef0108 0xc001ef0130 0xc001ef0168] [0xc001ef0128 0xc001ef0158] [0xba6c50 0xba6c50] 0xc00258a540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:25:18.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:25:18.551: INFO: rc: 1 Jan 29 13:25:18.552: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000badb30 exit status 1 true [0xc000640048 0xc000640190 0xc000640260] [0xc000640048 0xc000640190 0xc000640260] [0xc0006400e8 0xc000640208] [0xba6c50 0xba6c50] 0xc002d56300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:25:28.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:25:28.722: INFO: rc: 1 Jan 
29 13:25:28.722: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff8090 exit status 1 true [0xc0006c7038 0xc0006c7350 0xc0006c7658] [0xc0006c7038 0xc0006c7350 0xc0006c7658] [0xc0006c7260 0xc0006c7510] [0xba6c50 0xba6c50] 0xc0026f0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:25:38.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:25:38.955: INFO: rc: 1 Jan 29 13:25:38.956: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf00f0 exit status 1 true [0xc0003ae1d8 0xc000ba8040 0xc000ba8308] [0xc0003ae1d8 0xc000ba8040 0xc000ba8308] [0xc000ba8020 0xc000ba82b8] [0xba6c50 0xba6c50] 0xc002a144e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:25:48.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:25:49.138: INFO: rc: 1 Jan 29 13:25:49.138: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf0210 exit status 1 true [0xc000ba8368 0xc000ba8420 0xc000ba84b8] [0xc000ba8368 0xc000ba8420 0xc000ba84b8] [0xc000ba83f8 0xc000ba8480] [0xba6c50 0xba6c50] 0xc002a147e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:25:59.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:25:59.269: INFO: rc: 1 Jan 29 13:25:59.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf0360 exit status 1 true [0xc000ba84f8 0xc000ba8550 0xc000ba8718] [0xc000ba84f8 0xc000ba8550 0xc000ba8718] [0xc000ba8530 0xc000ba86c0] [0xba6c50 0xba6c50] 0xc002a14b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:26:09.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:26:09.504: INFO: rc: 1 Jan 29 13:26:09.504: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] 
Error from server (NotFound): pods "ss-2" not found [] 0xc001bf0480 exit status 1 true [0xc000ba8748 0xc000ba88a8 0xc000ba8970] [0xc000ba8748 0xc000ba88a8 0xc000ba8970] [0xc000ba8890 0xc000ba8968] [0xba6c50 0xba6c50] 0xc002a14ea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:26:19.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:26:19.702: INFO: rc: 1 Jan 29 13:26:19.702: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff8180 exit status 1 true [0xc0006c7760 0xc0006c77e0 0xc0006c7a88] [0xc0006c7760 0xc0006c77e0 0xc0006c7a88] [0xc0006c77b0 0xc0006c7800] [0xba6c50 0xba6c50] 0xc0026f1740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:26:29.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:26:29.959: INFO: rc: 1 Jan 29 13:26:29.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff8270 exit status 1 true [0xc0006c7b68 0xc0006c7dd0 0xc000a54060] [0xc0006c7b68 0xc0006c7dd0 0xc000a54060] [0xc0006c7d50 0xc0006c7f30] [0xba6c50 0xba6c50] 0xc002e46120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:26:39.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:26:40.125: INFO: rc: 1 Jan 29 13:26:40.125: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff8330 exit status 1 true [0xc000a541d0 0xc000a54268 0xc000a542e0] [0xc000a541d0 0xc000a54268 0xc000a542e0] [0xc000a54220 0xc000a542d0] [0xba6c50 0xba6c50] 0xc002e46420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:26:50.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:26:50.377: INFO: rc: 1 Jan 29 13:26:50.377: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001ff8420 exit status 1 true [0xc000a542f0 0xc000a54308 0xc000a54368] [0xc000a542f0 0xc000a54308 0xc000a54368] [0xc000a54300 0xc000a54348] [0xba6c50 0xba6c50] 0xc002e46720 }: Command stdout: stderr: 
Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:27:00.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:27:00.569: INFO: rc: 1 Jan 29 13:27:00.570: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf05a0 exit status 1 true [0xc000ba8980 0xc000ba8a40 0xc000ba8ad8] [0xc000ba8980 0xc000ba8a40 0xc000ba8ad8] [0xc000ba8a08 0xc000ba8a68] [0xba6c50 0xba6c50] 0xc002a151a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:27:10.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:27:10.731: INFO: rc: 1 Jan 29 13:27:10.731: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001bf06c0 exit status 1 true [0xc000ba8b28 0xc000ba8ba0 0xc000ba8c28] [0xc000ba8b28 0xc000ba8ba0 0xc000ba8c28] [0xc000ba8b68 0xc000ba8c08] [0xba6c50 0xba6c50] 0xc002a15620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 29 13:27:20.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 13:27:20.912: INFO: rc: 1 Jan 29 13:27:20.912: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Jan 29 13:27:20.912: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 29 13:27:20.929: INFO: Deleting all statefulset in ns statefulset-5911 Jan 29 13:27:20.932: INFO: Scaling statefulset ss to 0 Jan 29 13:27:20.940: INFO: Waiting for statefulset status.replicas updated to 0 Jan 29 13:27:20.943: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:27:20.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5911" for this suite. 
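Annotation: the long retry run above is the framework's RunHostCmd loop probing ss-2 while the StatefulSet is being scaled down in reverse order. The first failure (rc 126) is an OCI runtime exec error because the container was torn down mid-exec; every retry after that is exit 1 with "pods \"ss-2\" not found" once the pod object is gone. Note the `|| true` runs inside the remote shell, so it cannot mask a failure to start the exec session itself. A minimal sketch of the same probe loop (namespace and pod names taken from the log; the 10s cadence mirrors the framework's retry interval):

  # Retry the same exec the suite runs, backing off 10s between attempts.
  for i in $(seq 1 30); do
    kubectl exec --namespace=statefulset-5911 ss-2 -- /bin/sh -x -c \
      'mv -v /tmp/index.html /usr/share/nginx/html/ || true' && break
    sleep 10
  done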
Jan 29 13:27:26.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:27:27.116: INFO: namespace statefulset-5911 deletion completed in 6.148428325s • [SLOW TEST:388.063 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:27:27.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-9305/secret-test-afa1fc35-952d-48f6-b8e5-6fefab959d5d STEP: Creating a pod to test consume secrets Jan 29 13:27:27.218: INFO: Waiting up to 5m0s for pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b" in namespace "secrets-9305" to be "success or failure" Jan 29 13:27:27.245: INFO: Pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.956534ms Jan 29 13:27:29.259: INFO: Pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040417421s Jan 29 13:27:31.279: INFO: Pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060606763s Jan 29 13:27:33.300: INFO: Pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081526607s Jan 29 13:27:35.312: INFO: Pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094288134s STEP: Saw pod success Jan 29 13:27:35.313: INFO: Pod "pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b" satisfied condition "success or failure" Jan 29 13:27:35.319: INFO: Trying to get logs from node iruya-node pod pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b container env-test: STEP: delete the pod Jan 29 13:27:35.659: INFO: Waiting for pod pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b to disappear Jan 29 13:27:35.702: INFO: Pod pod-configmaps-18fdc2fb-37bf-47b9-9b62-b6517672b74b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:27:35.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9305" for this suite. 
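Annotation: the Secrets spec above creates a Secret (named secrets-9305/secret-test-<uuid> in the log) and a pod that consumes one of its keys as an environment variable, then checks the pod runs to Succeeded. A minimal equivalent sketch (secret name, key, and image are simplified illustrations, not the framework's exact manifest):

  kubectl create secret generic secret-test --from-literal=data-1=value-1 -n secrets-9305
  kubectl create -n secrets-9305 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: env-test
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      # Print the injected variable so the test can assert on the pod logs.
      command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-test
            key: data-1
  EOF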
Jan 29 13:27:41.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:27:42.038: INFO: namespace secrets-9305 deletion completed in 6.314554016s • [SLOW TEST:14.921 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:27:42.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4443375e-b2f8-46fb-a310-0840b45fa1fa STEP: Creating a pod to test consume secrets Jan 29 13:27:42.196: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2" in namespace "projected-3969" to be "success or failure" Jan 29 13:27:42.216: INFO: Pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.193414ms Jan 29 13:27:44.222: INFO: Pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026217665s Jan 29 13:27:46.230: INFO: Pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034727766s Jan 29 13:27:48.237: INFO: Pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041679473s Jan 29 13:27:50.245: INFO: Pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048887761s STEP: Saw pod success Jan 29 13:27:50.245: INFO: Pod "pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2" satisfied condition "success or failure" Jan 29 13:27:50.248: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2 container projected-secret-volume-test: STEP: delete the pod Jan 29 13:27:50.297: INFO: Waiting for pod pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2 to disappear Jan 29 13:27:50.302: INFO: Pod pod-projected-secrets-9b40c028-e022-47a7-a61c-c502ece0d5c2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:27:50.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3969" for this suite. 
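Annotation: "with mappings" in the Projected secret spec above means the projected volume remaps a secret key to a custom file path via items[]. A minimal sketch of that shape (secret name, key, path, and image are illustrative; the suite's real secret name carries a UUID):

  kubectl create -n projected-3969 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      # Read the remapped file to prove the key landed at the mapped path.
      command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected
        readOnly: true
    volumes:
    - name: projected-secret-volume
      projected:
        defaultMode: 0400
        sources:
        - secret:
            name: projected-secret-test-map
            items:
            - key: data-1
              path: new-path-data-1
  EOF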
Jan 29 13:27:56.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:27:56.455: INFO: namespace projected-3969 deletion completed in 6.147355203s • [SLOW TEST:14.417 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:27:56.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 29 13:27:56.623: INFO: Waiting up to 5m0s for pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8" in namespace "emptydir-3664" to be "success or failure" Jan 29 13:27:56.646: INFO: Pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.880727ms Jan 29 13:27:58.660: INFO: Pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03635322s Jan 29 13:28:00.666: INFO: Pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042630631s Jan 29 13:28:02.675: INFO: Pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051285397s Jan 29 13:28:04.681: INFO: Pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057685603s STEP: Saw pod success Jan 29 13:28:04.681: INFO: Pod "pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8" satisfied condition "success or failure" Jan 29 13:28:04.684: INFO: Trying to get logs from node iruya-node pod pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8 container test-container: STEP: delete the pod Jan 29 13:28:04.739: INFO: Waiting for pod pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8 to disappear Jan 29 13:28:04.747: INFO: Pod pod-750ce59a-0acb-4b38-acb4-ce1b89b2afa8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:28:04.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3664" for this suite. 
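Annotation: "(root,0666,tmpfs)" in the EmptyDir spec above means: run as root, expect file mode 0666, on a memory-backed emptyDir (medium: Memory, i.e. tmpfs). The suite uses its own mounttest image, whose flags the log does not show, so this busybox sketch only approximates the check (names and image are illustrative):

  kubectl create -n emptydir-3664 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-test
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # Confirm the mount is tmpfs, then create a file and report its mode/owner.
      command: ["sh", "-c", "mount | grep /test-volume; touch /test-volume/test-file && chmod 0666 /test-volume/test-file && stat -c 'mode=%a owner=%u' /test-volume/test-file"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory
  EOF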
Jan 29 13:28:10.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:28:10.968: INFO: namespace emptydir-3664 deletion completed in 6.218009729s • [SLOW TEST:14.513 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:28:10.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 29 13:28:11.214: INFO: Waiting up to 5m0s for pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a" in namespace "emptydir-7688" to be "success or failure" Jan 29 13:28:11.228: INFO: Pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.856592ms Jan 29 13:28:13.237: INFO: Pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023149525s Jan 29 13:28:15.256: INFO: Pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042678333s Jan 29 13:28:17.281: INFO: Pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067574522s Jan 29 13:28:19.294: INFO: Pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080681665s STEP: Saw pod success Jan 29 13:28:19.295: INFO: Pod "pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a" satisfied condition "success or failure" Jan 29 13:28:19.304: INFO: Trying to get logs from node iruya-node pod pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a container test-container: STEP: delete the pod Jan 29 13:28:19.344: INFO: Waiting for pod pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a to disappear Jan 29 13:28:19.353: INFO: Pod pod-c6be67a9-1c84-47a9-a2f4-cc054f13535a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:28:19.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7688" for this suite. 
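Annotation: the (root,0777,tmpfs) spec above follows the same pattern as the 0666 sketch earlier; the only difference is the requested mode, i.e. the check line becomes:

  chmod 0777 /test-volume/test-file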
Jan 29 13:28:25.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:28:25.795: INFO: namespace emptydir-7688 deletion completed in 6.435930272s • [SLOW TEST:14.826 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:28:25.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 29 13:28:25.922: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 29 13:28:25.945: INFO: Waiting for terminating namespaces to be deleted... Jan 29 13:28:25.952: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 29 13:28:25.971: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 29 13:28:25.971: INFO: Container kube-proxy ready: true, restart count 0 Jan 29 13:28:25.971: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 29 13:28:25.971: INFO: Container weave ready: true, restart count 0 Jan 29 13:28:25.971: INFO: Container weave-npc ready: true, restart count 0 Jan 29 13:28:25.971: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 29 13:28:26.028: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 29 13:28:26.028: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: Container kube-proxy ready: true, restart count 0 Jan 29 13:28:26.028: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: Container kube-apiserver ready: true, restart count 0 Jan 29 13:28:26.028: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: Container kube-scheduler ready: true, restart count 13 Jan 29 13:28:26.028: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: Container coredns ready: true, restart count 0 Jan 29 13:28:26.028: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: 
Container etcd ready: true, restart count 0 Jan 29 13:28:26.028: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 29 13:28:26.028: INFO: Container weave ready: true, restart count 0 Jan 29 13:28:26.028: INFO: Container weave-npc ready: true, restart count 0 Jan 29 13:28:26.028: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 29 13:28:26.028: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-28b84b22-cb12-4800-9980-72d58f05c3e5 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-28b84b22-cb12-4800-9980-72d58f05c3e5 off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-28b84b22-cb12-4800-9980-72d58f05c3e5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:28:44.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5971" for this suite. Jan 29 13:28:58.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:28:58.507: INFO: namespace sched-pred-5971 deletion completed in 14.140152673s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:32.711 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:28:58.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
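Annotation (for the SchedulerPredicates spec that just completed, before the lifecycle-hook spec below continues): the test launches an unlabeled pod to discover a schedulable node, applies a random kubernetes.io/e2e-<uuid> label with value 42 to that node, relaunches the pod with a matching nodeSelector, and finally strips the label. A minimal equivalent (label key and pause image are illustrative; the suite generates the key randomly):

  kubectl label nodes iruya-node kubernetes.io/e2e-example=42
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: with-labels
  spec:
    nodeSelector:
      kubernetes.io/e2e-example: "42"
    containers:
    - name: with-labels
      image: k8s.gcr.io/pause:3.1
  EOF
  # Remove the label afterwards, as the spec does:
  kubectl label nodes iruya-node kubernetes.io/e2e-example-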
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 29 13:29:16.779: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:16.840: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:18.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:18.848: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:20.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:20.849: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:22.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:22.852: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:24.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:24.851: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:26.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:26.850: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:28.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:28.855: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:30.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:30.859: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:32.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:32.847: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:34.840: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:34.861: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:36.841: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:36.871: INFO: Pod pod-with-prestop-exec-hook still exists Jan 29 13:29:38.842: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 29 13:29:38.864: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 29 13:29:38.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3406" for this suite. 
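Annotation: the "still exists" polling above is expected; deleting a pod with a preStop exec hook keeps it Terminating until the hook finishes (plus the grace period). The suite's hook phones home to the HTTPGet handler pod created in BeforeEach. A minimal sketch of the pod shape (HANDLER_POD_IP is a placeholder for that handler's address, and the image/endpoint are assumptions, not the framework's exact manifest):

  kubectl create -n container-lifecycle-hook-3406 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook
  spec:
    containers:
    - name: pod-with-prestop-exec-hook
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            # Runs inside the container on deletion, before SIGTERM is sent.
            command: ["sh", "-c", "wget -qO- http://HANDLER_POD_IP:8080/echo?msg=prestop"]
  EOF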
Jan 29 13:30:00.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 13:30:01.062: INFO: namespace container-lifecycle-hook-3406 deletion completed in 22.130134011s • [SLOW TEST:62.554 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 29 13:30:01.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 29 13:30:01.227: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 19.947152ms)
Jan 29 13:30:01.235: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.403728ms)
Jan 29 13:30:01.242: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.68314ms)
Jan 29 13:30:01.253: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.690455ms)
Jan 29 13:30:01.258: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.020067ms)
Jan 29 13:30:01.263: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.914878ms)
Jan 29 13:30:01.269: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.48297ms)
Jan 29 13:30:01.273: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.449021ms)
Jan 29 13:30:01.278: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.714091ms)
Jan 29 13:30:01.282: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.215432ms)
Jan 29 13:30:01.287: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.218707ms)
Jan 29 13:30:01.291: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.247647ms)
Jan 29 13:30:01.309: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.7016ms)
Jan 29 13:30:01.315: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.704446ms)
Jan 29 13:30:01.322: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.163571ms)
Jan 29 13:30:01.326: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.792899ms)
Jan 29 13:30:01.330: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.90099ms)
Jan 29 13:30:01.337: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.753305ms)
Jan 29 13:30:01.342: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.373882ms)
Jan 29 13:30:01.348: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.131656ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:30:01.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-761" for this suite.
Jan 29 13:30:07.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:30:07.526: INFO: namespace proxy-761 deletion completed in 6.173980204s

• [SLOW TEST:6.464 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
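Annotation: each "(N) /api/v1/nodes/iruya-node:10250/proxy/logs/: ... (200; Xms)" line above is one of 20 requests to the kubelet's logs endpoint through the apiserver's node proxy subresource, with the explicit kubelet port (10250) in the path; 200 is the status and Xms the round-trip latency. The same endpoint can be hit by hand (node name taken from the log):

  # Single request through the node proxy subresource:
  kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"

  # Rough reproduction of the 20-iteration latency sampling:
  for i in $(seq 0 19); do
    time kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/" > /dev/null
  done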
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:30:07.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 29 13:30:07.839: INFO: Number of nodes with available pods: 0
Jan 29 13:30:07.839: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:08.870: INFO: Number of nodes with available pods: 0
Jan 29 13:30:08.870: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:09.883: INFO: Number of nodes with available pods: 0
Jan 29 13:30:09.883: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:10.865: INFO: Number of nodes with available pods: 0
Jan 29 13:30:10.865: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:11.880: INFO: Number of nodes with available pods: 0
Jan 29 13:30:11.884: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:12.894: INFO: Number of nodes with available pods: 0
Jan 29 13:30:12.894: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:15.724: INFO: Number of nodes with available pods: 0
Jan 29 13:30:15.724: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:16.552: INFO: Number of nodes with available pods: 0
Jan 29 13:30:16.552: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:16.881: INFO: Number of nodes with available pods: 0
Jan 29 13:30:16.881: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:17.875: INFO: Number of nodes with available pods: 1
Jan 29 13:30:17.875: INFO: Node iruya-node is running more than one daemon pod
Jan 29 13:30:18.884: INFO: Number of nodes with available pods: 2
Jan 29 13:30:18.884: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 29 13:30:19.038: INFO: Number of nodes with available pods: 1
Jan 29 13:30:19.038: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:20.058: INFO: Number of nodes with available pods: 1
Jan 29 13:30:20.058: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:21.052: INFO: Number of nodes with available pods: 1
Jan 29 13:30:21.052: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:22.056: INFO: Number of nodes with available pods: 1
Jan 29 13:30:22.056: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:23.067: INFO: Number of nodes with available pods: 1
Jan 29 13:30:23.067: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:24.121: INFO: Number of nodes with available pods: 1
Jan 29 13:30:24.122: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:25.059: INFO: Number of nodes with available pods: 1
Jan 29 13:30:25.059: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:26.235: INFO: Number of nodes with available pods: 1
Jan 29 13:30:26.236: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:27.068: INFO: Number of nodes with available pods: 1
Jan 29 13:30:27.068: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:28.992: INFO: Number of nodes with available pods: 1
Jan 29 13:30:28.993: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:29.054: INFO: Number of nodes with available pods: 1
Jan 29 13:30:29.054: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:30.058: INFO: Number of nodes with available pods: 1
Jan 29 13:30:30.058: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:31.064: INFO: Number of nodes with available pods: 1
Jan 29 13:30:31.065: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 13:30:32.058: INFO: Number of nodes with available pods: 2
Jan 29 13:30:32.058: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1097; will wait for the garbage collector to delete the pods
Jan 29 13:30:32.124: INFO: Deleting DaemonSet.extensions daemon-set took: 9.886534ms
Jan 29 13:30:32.525: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.188871ms
Jan 29 13:30:47.943: INFO: Number of nodes with available pods: 0
Jan 29 13:30:47.943: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 13:30:47.952: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1097/daemonsets","resourceVersion":"22316038"},"items":null}

Jan 29 13:30:47.982: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1097/pods","resourceVersion":"22316038"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:30:48.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1097" for this suite.
Jan 29 13:30:54.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:30:54.140: INFO: namespace daemonsets-1097 deletion completed in 6.130861042s

• [SLOW TEST:46.613 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
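For reference, a minimal sketch of the kind of DaemonSet this test creates and tears down; the name matches the log, but the labels and image are illustrative, not the suite's actual fixture:

# One pod per schedulable node; the log above waits for "available
# pods" to reach the node count (2) before the stop/revive check.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
# Deleting it leaves the daemon pods to the garbage collector, as in the
# [AfterEach] above:
kubectl delete daemonset daemon-set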
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:30:54.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-205b9fd5-b13a-4bfb-82ef-322f62748350
STEP: Creating a pod to test consume secrets
Jan 29 13:30:54.435: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580" in namespace "projected-3646" to be "success or failure"
Jan 29 13:30:54.450: INFO: Pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580": Phase="Pending", Reason="", readiness=false. Elapsed: 14.790875ms
Jan 29 13:30:56.469: INFO: Pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033010149s
Jan 29 13:30:58.478: INFO: Pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042721743s
Jan 29 13:31:00.492: INFO: Pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056023125s
Jan 29 13:31:02.508: INFO: Pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072132915s
STEP: Saw pod success
Jan 29 13:31:02.508: INFO: Pod "pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580" satisfied condition "success or failure"
Jan 29 13:31:02.515: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 13:31:02.682: INFO: Waiting for pod pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580 to disappear
Jan 29 13:31:02.701: INFO: Pod pod-projected-secrets-e85dc20f-5466-407e-8da8-7c9cf0e38580 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:31:02.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3646" for this suite.
Jan 29 13:31:08.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:31:08.968: INFO: namespace projected-3646 deletion completed in 6.235110588s

• [SLOW TEST:14.828 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
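For reference, a minimal sketch of the projection under test: a secret consumed through a projected volume with defaultMode set, so every mounted file carries the requested mode. Names, data, and the mode value are illustrative, not the suite's fixtures.

# Create a secret and a pod that mounts it via a projected volume with
# defaultMode; the container lists the mode and reads the file back.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected
  volumes:
  - name: projected
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-test
EOF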
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:31:08.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7
Jan 29 13:31:09.102: INFO: Pod name my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7: Found 0 pods out of 1
Jan 29 13:31:14.112: INFO: Pod name my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7: Found 1 pod out of 1
Jan 29 13:31:14.113: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7" are running
Jan 29 13:31:18.126: INFO: Pod "my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7-xtsk5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:31:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:31:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:31:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 13:31:09 +0000 UTC Reason: Message:}])
Jan 29 13:31:18.127: INFO: Trying to dial the pod
Jan 29 13:31:23.154: INFO: Controller my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7: Got expected result from replica 1 [my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7-xtsk5]: "my-hostname-basic-fc977ece-319a-4f73-8ddc-dbe25fea81e7-xtsk5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:31:23.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6030" for this suite.
Jan 29 13:31:29.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:31:29.350: INFO: namespace replication-controller-6030 deletion completed in 6.191762733s

• [SLOW TEST:20.381 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
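For reference, a sketch of the kind of ReplicationController this test drives: one replica of a pod that serves its own hostname, which the test then dials and compares against the pod name. The image and tag below are an assumption (the e2e images registry hosts a serve-hostname image); treat them as placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # placeholder image/tag
        ports:
        - containerPort: 9376
EOF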
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:31:29.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 29 13:31:29.603: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-318,SelfLink:/api/v1/namespaces/watch-318/configmaps/e2e-watch-test-resource-version,UID:6c4e8a70-4ca7-4434-8a63-d184704a914a,ResourceVersion:22316177,Generation:0,CreationTimestamp:2020-01-29 13:31:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 13:31:29.604: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-318,SelfLink:/api/v1/namespaces/watch-318/configmaps/e2e-watch-test-resource-version,UID:6c4e8a70-4ca7-4434-8a63-d184704a914a,ResourceVersion:22316178,Generation:0,CreationTimestamp:2020-01-29 13:31:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:31:29.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-318" for this suite.
Jan 29 13:31:35.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:31:35.876: INFO: namespace watch-318 deletion completed in 6.261169029s

• [SLOW TEST:6.526 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
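The two notifications above (MODIFIED at ResourceVersion 22316177, DELETED at 22316178) are the replay a watch delivers when started from an earlier resource version. A raw-API way to reproduce the mechanism (the namespace and version number here are illustrative):

# Open a watch on configmaps starting from a specific resourceVersion;
# the apiserver streams every event after that version as JSON lines.
kubectl get --raw \
  "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=22316176"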
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:31:35.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 29 13:31:36.031: INFO: Waiting up to 5m0s for pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387" in namespace "downward-api-5591" to be "success or failure"
Jan 29 13:31:36.039: INFO: Pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346006ms
Jan 29 13:31:38.054: INFO: Pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022984422s
Jan 29 13:31:40.060: INFO: Pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029554211s
Jan 29 13:31:42.108: INFO: Pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077463637s
Jan 29 13:31:44.122: INFO: Pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091582141s
STEP: Saw pod success
Jan 29 13:31:44.123: INFO: Pod "downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387" satisfied condition "success or failure"
Jan 29 13:31:44.125: INFO: Trying to get logs from node iruya-node pod downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387 container dapi-container: 
STEP: delete the pod
Jan 29 13:31:44.237: INFO: Waiting for pod downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387 to disappear
Jan 29 13:31:44.255: INFO: Pod downward-api-3bc0313f-3f0c-494d-b90a-0f9cb3182387 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:31:44.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5591" for this suite.
Jan 29 13:31:50.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:31:50.458: INFO: namespace downward-api-5591 deletion completed in 6.182846541s

• [SLOW TEST:14.581 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
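For reference, a sketch of the downward-API wiring being verified: a container's own limits and requests exposed as environment variables through resourceFieldRef. Names and resource values are illustrative.

# The env vars resolve from the container's own resources block;
# containerName may be omitted when referring to the same container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF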
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:31:50.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on the node's default medium
Jan 29 13:31:50.681: INFO: Waiting up to 5m0s for pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42" in namespace "emptydir-4513" to be "success or failure"
Jan 29 13:31:50.692: INFO: Pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536933ms
Jan 29 13:31:52.704: INFO: Pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022888413s
Jan 29 13:31:54.711: INFO: Pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030083652s
Jan 29 13:31:56.731: INFO: Pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050234733s
Jan 29 13:31:58.745: INFO: Pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063872866s
STEP: Saw pod success
Jan 29 13:31:58.745: INFO: Pod "pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42" satisfied condition "success or failure"
Jan 29 13:31:58.753: INFO: Trying to get logs from node iruya-node pod pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42 container test-container: 
STEP: delete the pod
Jan 29 13:31:59.544: INFO: Waiting for pod pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42 to disappear
Jan 29 13:31:59.558: INFO: Pod pod-b8c2f951-d56e-4c82-acfc-5179cf3c8f42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:31:59.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4513" for this suite.
Jan 29 13:32:05.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:32:05.858: INFO: namespace emptydir-4513 deletion completed in 6.246816767s

• [SLOW TEST:15.399 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
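The "(root,0666,default)" in the test name means: file created as root, mode 0666, emptyDir backed by the default medium (node disk rather than memory). A hedged busybox approximation of that check; the suite uses its own test image:

# Write a file on an emptyDir volume, force mode 0666, and print the
# mode back; expected output is "666".
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /ed/f && chmod 0666 /ed/f && stat -c '%a' /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}
EOF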
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:32:05.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 29 13:32:14.715: INFO: Successfully updated pod "labelsupdatea2b40f98-0a0a-4b0e-b350-080112e37f89"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:32:16.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1101" for this suite.
Jan 29 13:32:38.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:32:39.052: INFO: namespace projected-1101 deletion completed in 22.181582304s

• [SLOW TEST:33.193 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
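For reference, a sketch of the behaviour verified above: a projected downwardAPI volume exposing metadata.labels, whose mounted file the kubelet rewrites when the pod is relabeled. Names and labels are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Change the label and watch the mounted file follow:
kubectl label pod labelsupdate-demo key=value2 --overwrite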
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:32:39.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 13:32:39.152: INFO: Creating deployment "test-recreate-deployment"
Jan 29 13:32:39.164: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 29 13:32:39.236: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 29 13:32:41.255: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 29 13:32:41.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:32:43.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:32:45.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715901559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:32:47.268: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 29 13:32:47.283: INFO: Updating deployment test-recreate-deployment
Jan 29 13:32:47.283: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 29 13:32:47.843: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9483,SelfLink:/apis/apps/v1/namespaces/deployment-9483/deployments/test-recreate-deployment,UID:fc952481-7329-415c-a0b6-dd435bec9c9e,ResourceVersion:22316400,Generation:2,CreationTimestamp:2020-01-29 13:32:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-29 13:32:47 +0000 UTC 2020-01-29 13:32:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-29 13:32:47 +0000 UTC 2020-01-29 13:32:39 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 29 13:32:47.898: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9483,SelfLink:/apis/apps/v1/namespaces/deployment-9483/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5b9f0ffc-d238-4872-9e74-76e514aeb493,ResourceVersion:22316399,Generation:1,CreationTimestamp:2020-01-29 13:32:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment fc952481-7329-415c-a0b6-dd435bec9c9e 0xc001d40c07 0xc001d40c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 13:32:47.898: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 29 13:32:47.899: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9483,SelfLink:/apis/apps/v1/namespaces/deployment-9483/replicasets/test-recreate-deployment-6df85df6b9,UID:21175262-57b9-44de-8c0a-e5ab899acd25,ResourceVersion:22316389,Generation:2,CreationTimestamp:2020-01-29 13:32:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment fc952481-7329-415c-a0b6-dd435bec9c9e 0xc001d40ce7 0xc001d40ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 13:32:48.016: INFO: Pod "test-recreate-deployment-5c8c9cc69d-54flm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-54flm,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9483,SelfLink:/api/v1/namespaces/deployment-9483/pods/test-recreate-deployment-5c8c9cc69d-54flm,UID:9f722bfb-2861-4a96-a17a-82a56546d666,ResourceVersion:22316401,Generation:0,CreationTimestamp:2020-01-29 13:32:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5b9f0ffc-d238-4872-9e74-76e514aeb493 0xc001d415f7 0xc001d415f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f6j8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f6j8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-f6j8w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d41670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d41690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:32:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:32:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:32:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:32:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-29 13:32:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:32:48.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9483" for this suite.
Jan 29 13:32:54.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:32:54.168: INFO: namespace deployment-9483 deletion completed in 6.144901975s

• [SLOW TEST:15.115 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
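The dumps above show the Recreate strategy at work: the old ReplicaSet (redis) is scaled to zero before the new one (nginx) is created, so old and new pods never overlap. A minimal sketch using the labels and images from the log (other names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Trigger a new rollout; with Recreate, the redis pod is deleted before
# the nginx pod is created:
kubectl set image deployment/test-recreate-deployment app=nginx:1.14-alpine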
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:32:54.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:33:54.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7903" for this suite.
Jan 29 13:34:16.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:34:16.563: INFO: namespace container-probe-7903 deletion completed in 22.193067311s

• [SLOW TEST:82.395 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
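The test above runs a pod whose readiness probe always fails: for the whole observation window the pod stays Running but never becomes Ready, and its restart count stays at zero, since readiness (unlike liveness) never kills the container. A minimal sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails -> pod never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF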
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:34:16.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 29 13:34:26.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-ee0b2547-fc6b-47d1-b33c-d6a2cc243cb3 -c busybox-main-container --namespace=emptydir-271 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 29 13:34:29.164: INFO: stderr: "I0129 13:34:28.833355    1628 log.go:172] (0xc000ad8580) (0xc000ad0a00) Create stream\nI0129 13:34:28.833508    1628 log.go:172] (0xc000ad8580) (0xc000ad0a00) Stream added, broadcasting: 1\nI0129 13:34:28.842986    1628 log.go:172] (0xc000ad8580) Reply frame received for 1\nI0129 13:34:28.843112    1628 log.go:172] (0xc000ad8580) (0xc00069e3c0) Create stream\nI0129 13:34:28.843139    1628 log.go:172] (0xc000ad8580) (0xc00069e3c0) Stream added, broadcasting: 3\nI0129 13:34:28.847782    1628 log.go:172] (0xc000ad8580) Reply frame received for 3\nI0129 13:34:28.848058    1628 log.go:172] (0xc000ad8580) (0xc000324000) Create stream\nI0129 13:34:28.848084    1628 log.go:172] (0xc000ad8580) (0xc000324000) Stream added, broadcasting: 5\nI0129 13:34:28.850798    1628 log.go:172] (0xc000ad8580) Reply frame received for 5\nI0129 13:34:28.980898    1628 log.go:172] (0xc000ad8580) Data frame received for 3\nI0129 13:34:28.981016    1628 log.go:172] (0xc00069e3c0) (3) Data frame handling\nI0129 13:34:28.981067    1628 log.go:172] (0xc00069e3c0) (3) Data frame sent\nI0129 13:34:29.145702    1628 log.go:172] (0xc000ad8580) (0xc00069e3c0) Stream removed, broadcasting: 3\nI0129 13:34:29.146815    1628 log.go:172] (0xc000ad8580) (0xc000324000) Stream removed, broadcasting: 5\nI0129 13:34:29.147607    1628 log.go:172] (0xc000ad8580) Data frame received for 1\nI0129 13:34:29.147674    1628 log.go:172] (0xc000ad0a00) (1) Data frame handling\nI0129 13:34:29.147712    1628 log.go:172] (0xc000ad0a00) (1) Data frame sent\nI0129 13:34:29.147742    1628 log.go:172] (0xc000ad8580) (0xc000ad0a00) Stream removed, broadcasting: 1\nI0129 13:34:29.147791    1628 log.go:172] (0xc000ad8580) Go away received\nI0129 13:34:29.153038    1628 log.go:172] (0xc000ad8580) (0xc000ad0a00) Stream removed, broadcasting: 1\nI0129 13:34:29.153263    1628 log.go:172] (0xc000ad8580) (0xc00069e3c0) Stream removed, broadcasting: 3\nI0129 13:34:29.153309    1628 log.go:172] (0xc000ad8580) (0xc000324000) Stream removed, broadcasting: 5\n"
Jan 29 13:34:29.164: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:34:29.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-271" for this suite.
Jan 29 13:34:35.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:34:35.424: INFO: namespace emptydir-271 deletion completed in 6.250945073s

• [SLOW TEST:18.860 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
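A sketch of the pod shape behind the exec/cat sequence above: two containers sharing one emptyDir, the sub-container writing the file the main container reads. Container names and paths follow the log; the image is illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  volumes:
  - name: share
    emptyDir: {}
EOF
# Read the shared file from the other container, as the test does:
kubectl exec pod-sharedvolume -c busybox-main-container -- \
  cat /usr/share/volumeshare/shareddata.txt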
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:34:35.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-e671f420-a2a6-402b-9d6d-800d02da6b22
STEP: Creating secret with name secret-projected-all-test-volume-2382c5b1-14e6-4c81-9d92-d636b41aa1de
STEP: Creating a pod to test all projections for the projected volume plugin
Jan 29 13:34:35.567: INFO: Waiting up to 5m0s for pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297" in namespace "projected-3576" to be "success or failure"
Jan 29 13:34:35.574: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283786ms
Jan 29 13:34:37.582: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014166306s
Jan 29 13:34:39.592: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024103431s
Jan 29 13:34:41.600: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032745502s
Jan 29 13:34:43.615: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047341494s
Jan 29 13:34:45.623: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055808995s
STEP: Saw pod success
Jan 29 13:34:45.623: INFO: Pod "projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297" satisfied condition "success or failure"
Jan 29 13:34:45.628: INFO: Trying to get logs from node iruya-node pod projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297 container projected-all-volume-test: 
STEP: delete the pod
Jan 29 13:34:45.730: INFO: Waiting for pod projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297 to disappear
Jan 29 13:34:45.837: INFO: Pod projected-volume-b3593dbb-2bbf-4be3-8b15-e66f33299297 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:34:45.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3576" for this suite.
Jan 29 13:34:51.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:34:52.103: INFO: namespace projected-3576 deletion completed in 6.217054414s

• [SLOW TEST:16.677 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
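A sketch of the "all components" projection verified above: one projected volume combining configMap, secret, and downwardAPI sources. All names and data are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-all-cm
data:
  configmap-data: from-configmap
---
apiVersion: v1
kind: Secret
metadata:
  name: projected-all-secret
stringData:
  secret-data: from-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "ls -R /all && cat /all/podname"]
    volumeMounts:
    - name: all
      mountPath: /all
  volumes:
  - name: all
    projected:
      sources:
      - configMap:
          name: projected-all-cm
      - secret:
          name: projected-all-secret
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF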
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:34:52.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-0a6aee69-f0b7-4386-ae8f-d5a54f3fdff9
STEP: Creating a pod to test consume configMaps
Jan 29 13:34:52.359: INFO: Waiting up to 5m0s for pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04" in namespace "configmap-3202" to be "success or failure"
Jan 29 13:34:52.404: INFO: Pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04": Phase="Pending", Reason="", readiness=false. Elapsed: 44.95467ms
Jan 29 13:34:54.424: INFO: Pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064821868s
Jan 29 13:34:56.509: INFO: Pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149589682s
Jan 29 13:34:58.529: INFO: Pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169499724s
Jan 29 13:35:00.545: INFO: Pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.186052721s
STEP: Saw pod success
Jan 29 13:35:00.546: INFO: Pod "pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04" satisfied condition "success or failure"
Jan 29 13:35:00.550: INFO: Trying to get logs from node iruya-node pod pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04 container configmap-volume-test: 
STEP: delete the pod
Jan 29 13:35:00.887: INFO: Waiting for pod pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04 to disappear
Jan 29 13:35:00.898: INFO: Pod pod-configmaps-775d974a-ca77-4564-af53-18f9991d8f04 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:35:00.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3202" for this suite.
Jan 29 13:35:06.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:35:07.066: INFO: namespace configmap-3202 deletion completed in 6.158177748s

• [SLOW TEST:14.963 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
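A sketch pairing the two aspects of the test above: a configMap volume with an explicit key-to-path mapping, read by a container running as a non-root UID. Names, UID, and data are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1   # key mapped to a custom path
EOF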
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:35:07.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8601; will wait for the garbage collector to delete the pods
Jan 29 13:35:17.247: INFO: Deleting Job.batch foo took: 21.069906ms
Jan 29 13:35:17.548: INFO: Terminating Job.batch foo pods took: 300.699161ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:36:06.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8601" for this suite.
Jan 29 13:36:12.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:36:12.933: INFO: namespace job-8601 deletion completed in 6.174339199s

• [SLOW TEST:65.866 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
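The deletion step above relies on ownership: deleting the Job leaves its pods to the garbage collector, which removes them via their ownerReferences. A hedged reproduction (names and command illustrative):

kubectl create job foo --image=busybox -- sleep 300
kubectl delete job foo
# Pods owned by the Job carry the job-name label and disappear once the
# garbage collector processes them:
kubectl get pods -l job-name=foo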
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:36:12.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6175
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6175
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-6175
Jan 29 13:36:13.094: INFO: Found 0 stateful pods, waiting for 1
Jan 29 13:36:23.111: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with an unhealthy stateful pod
Jan 29 13:36:23.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:36:23.749: INFO: stderr: "I0129 13:36:23.343643    1654 log.go:172] (0xc000138fd0) (0xc000624b40) Create stream\nI0129 13:36:23.343987    1654 log.go:172] (0xc000138fd0) (0xc000624b40) Stream added, broadcasting: 1\nI0129 13:36:23.351782    1654 log.go:172] (0xc000138fd0) Reply frame received for 1\nI0129 13:36:23.351826    1654 log.go:172] (0xc000138fd0) (0xc000624be0) Create stream\nI0129 13:36:23.351840    1654 log.go:172] (0xc000138fd0) (0xc000624be0) Stream added, broadcasting: 3\nI0129 13:36:23.355089    1654 log.go:172] (0xc000138fd0) Reply frame received for 3\nI0129 13:36:23.355275    1654 log.go:172] (0xc000138fd0) (0xc000624c80) Create stream\nI0129 13:36:23.355293    1654 log.go:172] (0xc000138fd0) (0xc000624c80) Stream added, broadcasting: 5\nI0129 13:36:23.357307    1654 log.go:172] (0xc000138fd0) Reply frame received for 5\nI0129 13:36:23.485224    1654 log.go:172] (0xc000138fd0) Data frame received for 5\nI0129 13:36:23.485376    1654 log.go:172] (0xc000624c80) (5) Data frame handling\nI0129 13:36:23.485408    1654 log.go:172] (0xc000624c80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:36:23.535514    1654 log.go:172] (0xc000138fd0) Data frame received for 3\nI0129 13:36:23.535645    1654 log.go:172] (0xc000624be0) (3) Data frame handling\nI0129 13:36:23.535682    1654 log.go:172] (0xc000624be0) (3) Data frame sent\nI0129 13:36:23.728265    1654 log.go:172] (0xc000138fd0) (0xc000624be0) Stream removed, broadcasting: 3\nI0129 13:36:23.728936    1654 log.go:172] (0xc000138fd0) Data frame received for 1\nI0129 13:36:23.729071    1654 log.go:172] (0xc000138fd0) (0xc000624c80) Stream removed, broadcasting: 5\nI0129 13:36:23.729503    1654 log.go:172] (0xc000624b40) (1) Data frame handling\nI0129 13:36:23.729745    1654 log.go:172] (0xc000624b40) (1) Data frame sent\nI0129 13:36:23.730086    1654 log.go:172] (0xc000138fd0) (0xc000624b40) Stream removed, broadcasting: 1\nI0129 13:36:23.730248    1654 log.go:172] (0xc000138fd0) Go away received\nI0129 13:36:23.732947    1654 log.go:172] (0xc000138fd0) (0xc000624b40) Stream removed, broadcasting: 1\nI0129 13:36:23.732996    1654 log.go:172] (0xc000138fd0) (0xc000624be0) Stream removed, broadcasting: 3\nI0129 13:36:23.733016    1654 log.go:172] (0xc000138fd0) (0xc000624c80) Stream removed, broadcasting: 5\n"
Jan 29 13:36:23.749: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:36:23.749: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

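Moving index.html out of the nginx web root is how the test makes ss-0 unhealthy: per the mv/restore pattern this suite uses, the fixture's readiness probe checks the served index page, so the probe starts failing and the pod flips to Ready=false, which is exactly what the next wait confirms. The same condition can be watched directly (namespace and pod name from the log):

  kubectl get pod ss-0 --namespace=statefulset-6175 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'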
Jan 29 13:36:23.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 29 13:36:33.775: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:36:33.775: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 13:36:33.822: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 29 13:36:33.822: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:36:33.822: INFO: 
Jan 29 13:36:33.822: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 29 13:36:35.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984841663s
Jan 29 13:36:36.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.621910517s
Jan 29 13:36:37.935: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.361174279s
Jan 29 13:36:38.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.871982499s
Jan 29 13:36:40.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.854061403s
Jan 29 13:36:41.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.672755736s
Jan 29 13:36:42.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.663616946s
Jan 29 13:36:43.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 652.836026ms
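The ten-second polling window above guards both sides of the burst-scaling claim: with ss-0 still unready, a Parallel StatefulSet is allowed to keep creating replicas up to spec.replicas, but never past it, so the harness confirms the count holds at exactly 3. Done by hand, the equivalent scale request would be:

  kubectl scale statefulset ss --namespace=statefulset-6175 --replicas=3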
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-6175
Jan 29 13:36:44.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:36:44.798: INFO: stderr: "I0129 13:36:44.366170    1674 log.go:172] (0xc0008c82c0) (0xc00091c5a0) Create stream\nI0129 13:36:44.366497    1674 log.go:172] (0xc0008c82c0) (0xc00091c5a0) Stream added, broadcasting: 1\nI0129 13:36:44.373687    1674 log.go:172] (0xc0008c82c0) Reply frame received for 1\nI0129 13:36:44.373734    1674 log.go:172] (0xc0008c82c0) (0xc00091c6e0) Create stream\nI0129 13:36:44.373742    1674 log.go:172] (0xc0008c82c0) (0xc00091c6e0) Stream added, broadcasting: 3\nI0129 13:36:44.375173    1674 log.go:172] (0xc0008c82c0) Reply frame received for 3\nI0129 13:36:44.375211    1674 log.go:172] (0xc0008c82c0) (0xc00091c780) Create stream\nI0129 13:36:44.375224    1674 log.go:172] (0xc0008c82c0) (0xc00091c780) Stream added, broadcasting: 5\nI0129 13:36:44.376832    1674 log.go:172] (0xc0008c82c0) Reply frame received for 5\nI0129 13:36:44.498280    1674 log.go:172] (0xc0008c82c0) Data frame received for 5\nI0129 13:36:44.498510    1674 log.go:172] (0xc00091c780) (5) Data frame handling\nI0129 13:36:44.498585    1674 log.go:172] (0xc00091c780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0129 13:36:44.498603    1674 log.go:172] (0xc0008c82c0) Data frame received for 3\nI0129 13:36:44.498690    1674 log.go:172] (0xc00091c6e0) (3) Data frame handling\nI0129 13:36:44.498723    1674 log.go:172] (0xc00091c6e0) (3) Data frame sent\nI0129 13:36:44.781410    1674 log.go:172] (0xc0008c82c0) (0xc00091c780) Stream removed, broadcasting: 5\nI0129 13:36:44.781623    1674 log.go:172] (0xc0008c82c0) Data frame received for 1\nI0129 13:36:44.781663    1674 log.go:172] (0xc00091c5a0) (1) Data frame handling\nI0129 13:36:44.781695    1674 log.go:172] (0xc00091c5a0) (1) Data frame sent\nI0129 13:36:44.782083    1674 log.go:172] (0xc0008c82c0) (0xc00091c6e0) Stream removed, broadcasting: 3\nI0129 13:36:44.782472    1674 log.go:172] (0xc0008c82c0) (0xc00091c5a0) Stream removed, broadcasting: 1\nI0129 13:36:44.782500    1674 log.go:172] (0xc0008c82c0) Go away received\nI0129 13:36:44.785060    1674 log.go:172] (0xc0008c82c0) (0xc00091c5a0) Stream removed, broadcasting: 1\nI0129 13:36:44.785085    1674 log.go:172] (0xc0008c82c0) (0xc00091c6e0) Stream removed, broadcasting: 3\nI0129 13:36:44.785099    1674 log.go:172] (0xc0008c82c0) (0xc00091c780) Stream removed, broadcasting: 5\n"
Jan 29 13:36:44.798: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:36:44.798: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 13:36:44.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:36:45.304: INFO: stderr: "I0129 13:36:45.103758    1692 log.go:172] (0xc0007d2790) (0xc000752960) Create stream\nI0129 13:36:45.104146    1692 log.go:172] (0xc0007d2790) (0xc000752960) Stream added, broadcasting: 1\nI0129 13:36:45.120519    1692 log.go:172] (0xc0007d2790) Reply frame received for 1\nI0129 13:36:45.120648    1692 log.go:172] (0xc0007d2790) (0xc000752000) Create stream\nI0129 13:36:45.120678    1692 log.go:172] (0xc0007d2790) (0xc000752000) Stream added, broadcasting: 3\nI0129 13:36:45.122430    1692 log.go:172] (0xc0007d2790) Reply frame received for 3\nI0129 13:36:45.122461    1692 log.go:172] (0xc0007d2790) (0xc0006943c0) Create stream\nI0129 13:36:45.122470    1692 log.go:172] (0xc0007d2790) (0xc0006943c0) Stream added, broadcasting: 5\nI0129 13:36:45.123587    1692 log.go:172] (0xc0007d2790) Reply frame received for 5\nI0129 13:36:45.205240    1692 log.go:172] (0xc0007d2790) Data frame received for 3\nI0129 13:36:45.205728    1692 log.go:172] (0xc000752000) (3) Data frame handling\nI0129 13:36:45.205881    1692 log.go:172] (0xc0007d2790) Data frame received for 5\nI0129 13:36:45.205980    1692 log.go:172] (0xc0006943c0) (5) Data frame handling\nI0129 13:36:45.206027    1692 log.go:172] (0xc0006943c0) (5) Data frame sent\nI0129 13:36:45.206145    1692 log.go:172] (0xc000752000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0129 13:36:45.292954    1692 log.go:172] (0xc0007d2790) (0xc000752000) Stream removed, broadcasting: 3\nI0129 13:36:45.293363    1692 log.go:172] (0xc0007d2790) Data frame received for 1\nI0129 13:36:45.293503    1692 log.go:172] (0xc000752960) (1) Data frame handling\nI0129 13:36:45.293595    1692 log.go:172] (0xc000752960) (1) Data frame sent\nI0129 13:36:45.293879    1692 log.go:172] (0xc0007d2790) (0xc0006943c0) Stream removed, broadcasting: 5\nI0129 13:36:45.294009    1692 log.go:172] (0xc0007d2790) (0xc000752960) Stream removed, broadcasting: 1\nI0129 13:36:45.294047    1692 log.go:172] (0xc0007d2790) Go away received\nI0129 13:36:45.295015    1692 log.go:172] (0xc0007d2790) (0xc000752960) Stream removed, broadcasting: 1\nI0129 13:36:45.295034    1692 log.go:172] (0xc0007d2790) (0xc000752000) Stream removed, broadcasting: 3\nI0129 13:36:45.295041    1692 log.go:172] (0xc0007d2790) (0xc0006943c0) Stream removed, broadcasting: 5\n"
Jan 29 13:36:45.305: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:36:45.305: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 13:36:45.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:36:46.104: INFO: stderr: "I0129 13:36:45.525466    1712 log.go:172] (0xc000ab6420) (0xc000a90780) Create stream\nI0129 13:36:45.525954    1712 log.go:172] (0xc000ab6420) (0xc000a90780) Stream added, broadcasting: 1\nI0129 13:36:45.546030    1712 log.go:172] (0xc000ab6420) Reply frame received for 1\nI0129 13:36:45.546227    1712 log.go:172] (0xc000ab6420) (0xc000a90000) Create stream\nI0129 13:36:45.546254    1712 log.go:172] (0xc000ab6420) (0xc000a90000) Stream added, broadcasting: 3\nI0129 13:36:45.549096    1712 log.go:172] (0xc000ab6420) Reply frame received for 3\nI0129 13:36:45.549126    1712 log.go:172] (0xc000ab6420) (0xc000a900a0) Create stream\nI0129 13:36:45.549134    1712 log.go:172] (0xc000ab6420) (0xc000a900a0) Stream added, broadcasting: 5\nI0129 13:36:45.551231    1712 log.go:172] (0xc000ab6420) Reply frame received for 5\nI0129 13:36:45.902481    1712 log.go:172] (0xc000ab6420) Data frame received for 5\nI0129 13:36:45.902778    1712 log.go:172] (0xc000a900a0) (5) Data frame handling\nI0129 13:36:45.902815    1712 log.go:172] (0xc000a900a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0129 13:36:45.902938    1712 log.go:172] (0xc000ab6420) Data frame received for 3\nI0129 13:36:45.903020    1712 log.go:172] (0xc000a90000) (3) Data frame handling\nI0129 13:36:45.903075    1712 log.go:172] (0xc000a90000) (3) Data frame sent\nI0129 13:36:46.091160    1712 log.go:172] (0xc000ab6420) Data frame received for 1\nI0129 13:36:46.091423    1712 log.go:172] (0xc000ab6420) (0xc000a90000) Stream removed, broadcasting: 3\nI0129 13:36:46.091550    1712 log.go:172] (0xc000a90780) (1) Data frame handling\nI0129 13:36:46.091578    1712 log.go:172] (0xc000a90780) (1) Data frame sent\nI0129 13:36:46.092144    1712 log.go:172] (0xc000ab6420) (0xc000a900a0) Stream removed, broadcasting: 5\nI0129 13:36:46.092494    1712 log.go:172] (0xc000ab6420) (0xc000a90780) Stream removed, broadcasting: 1\nI0129 13:36:46.092529    1712 log.go:172] (0xc000ab6420) Go away received\nI0129 13:36:46.094150    1712 log.go:172] (0xc000ab6420) (0xc000a90780) Stream removed, broadcasting: 1\nI0129 13:36:46.094210    1712 log.go:172] (0xc000ab6420) (0xc000a90000) Stream removed, broadcasting: 3\nI0129 13:36:46.094221    1712 log.go:172] (0xc000ab6420) (0xc000a900a0) Stream removed, broadcasting: 5\n"
Jan 29 13:36:46.105: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:36:46.105: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

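Restoring index.html flips the readiness probes back to passing so all three pods can report Ready=true below. Note the || true guard in the exec'd command: on ss-1 and ss-2 the file was never moved, so mv fails with "can't rename '/tmp/index.html': No such file or directory" in the stderr streams above, and the guard keeps the shell's exit status at 0 so the harness does not treat the no-op as a failure:

  /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'   # -x echoes the command; || true swallows a failed mv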
Jan 29 13:36:46.123: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:36:46.123: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:36:46.123: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with an unhealthy stateful pod
Jan 29 13:36:46.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:36:46.751: INFO: stderr: "I0129 13:36:46.304134    1732 log.go:172] (0xc0001168f0) (0xc000788be0) Create stream\nI0129 13:36:46.304421    1732 log.go:172] (0xc0001168f0) (0xc000788be0) Stream added, broadcasting: 1\nI0129 13:36:46.312610    1732 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0129 13:36:46.312666    1732 log.go:172] (0xc0001168f0) (0xc000788c80) Create stream\nI0129 13:36:46.312672    1732 log.go:172] (0xc0001168f0) (0xc000788c80) Stream added, broadcasting: 3\nI0129 13:36:46.314720    1732 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0129 13:36:46.314827    1732 log.go:172] (0xc0001168f0) (0xc0007c2c80) Create stream\nI0129 13:36:46.314859    1732 log.go:172] (0xc0001168f0) (0xc0007c2c80) Stream added, broadcasting: 5\nI0129 13:36:46.316178    1732 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0129 13:36:46.407579    1732 log.go:172] (0xc0001168f0) Data frame received for 5\nI0129 13:36:46.407649    1732 log.go:172] (0xc0007c2c80) (5) Data frame handling\nI0129 13:36:46.407677    1732 log.go:172] (0xc0007c2c80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:36:46.408922    1732 log.go:172] (0xc0001168f0) Data frame received for 3\nI0129 13:36:46.408942    1732 log.go:172] (0xc000788c80) (3) Data frame handling\nI0129 13:36:46.408962    1732 log.go:172] (0xc000788c80) (3) Data frame sent\nI0129 13:36:46.729405    1732 log.go:172] (0xc0001168f0) (0xc000788c80) Stream removed, broadcasting: 3\nI0129 13:36:46.729713    1732 log.go:172] (0xc0001168f0) Data frame received for 1\nI0129 13:36:46.729778    1732 log.go:172] (0xc000788be0) (1) Data frame handling\nI0129 13:36:46.729816    1732 log.go:172] (0xc000788be0) (1) Data frame sent\nI0129 13:36:46.729960    1732 log.go:172] (0xc0001168f0) (0xc000788be0) Stream removed, broadcasting: 1\nI0129 13:36:46.731890    1732 log.go:172] (0xc0001168f0) (0xc0007c2c80) Stream removed, broadcasting: 5\nI0129 13:36:46.732016    1732 log.go:172] (0xc0001168f0) Go away received\nI0129 13:36:46.734325    1732 log.go:172] (0xc0001168f0) (0xc000788be0) Stream removed, broadcasting: 1\nI0129 13:36:46.735084    1732 log.go:172] (0xc0001168f0) (0xc000788c80) Stream removed, broadcasting: 3\nI0129 13:36:46.735191    1732 log.go:172] (0xc0001168f0) (0xc0007c2c80) Stream removed, broadcasting: 5\n"
Jan 29 13:36:46.751: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:36:46.751: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 13:36:46.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:36:47.159: INFO: stderr: "I0129 13:36:46.967407    1752 log.go:172] (0xc00069c420) (0xc000600a00) Create stream\nI0129 13:36:46.967592    1752 log.go:172] (0xc00069c420) (0xc000600a00) Stream added, broadcasting: 1\nI0129 13:36:46.971051    1752 log.go:172] (0xc00069c420) Reply frame received for 1\nI0129 13:36:46.971116    1752 log.go:172] (0xc00069c420) (0xc000416000) Create stream\nI0129 13:36:46.971126    1752 log.go:172] (0xc00069c420) (0xc000416000) Stream added, broadcasting: 3\nI0129 13:36:46.972009    1752 log.go:172] (0xc00069c420) Reply frame received for 3\nI0129 13:36:46.972031    1752 log.go:172] (0xc00069c420) (0xc0002dc000) Create stream\nI0129 13:36:46.972037    1752 log.go:172] (0xc00069c420) (0xc0002dc000) Stream added, broadcasting: 5\nI0129 13:36:46.973045    1752 log.go:172] (0xc00069c420) Reply frame received for 5\nI0129 13:36:47.054221    1752 log.go:172] (0xc00069c420) Data frame received for 5\nI0129 13:36:47.054316    1752 log.go:172] (0xc0002dc000) (5) Data frame handling\nI0129 13:36:47.054333    1752 log.go:172] (0xc0002dc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:36:47.091180    1752 log.go:172] (0xc00069c420) Data frame received for 3\nI0129 13:36:47.091205    1752 log.go:172] (0xc000416000) (3) Data frame handling\nI0129 13:36:47.091216    1752 log.go:172] (0xc000416000) (3) Data frame sent\nI0129 13:36:47.152737    1752 log.go:172] (0xc00069c420) (0xc000416000) Stream removed, broadcasting: 3\nI0129 13:36:47.152996    1752 log.go:172] (0xc00069c420) Data frame received for 1\nI0129 13:36:47.153109    1752 log.go:172] (0xc00069c420) (0xc0002dc000) Stream removed, broadcasting: 5\nI0129 13:36:47.153164    1752 log.go:172] (0xc000600a00) (1) Data frame handling\nI0129 13:36:47.153192    1752 log.go:172] (0xc000600a00) (1) Data frame sent\nI0129 13:36:47.153213    1752 log.go:172] (0xc00069c420) (0xc000600a00) Stream removed, broadcasting: 1\nI0129 13:36:47.153237    1752 log.go:172] (0xc00069c420) Go away received\nI0129 13:36:47.154749    1752 log.go:172] (0xc00069c420) (0xc000600a00) Stream removed, broadcasting: 1\nI0129 13:36:47.154764    1752 log.go:172] (0xc00069c420) (0xc000416000) Stream removed, broadcasting: 3\nI0129 13:36:47.154770    1752 log.go:172] (0xc00069c420) (0xc0002dc000) Stream removed, broadcasting: 5\n"
Jan 29 13:36:47.159: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:36:47.159: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 13:36:47.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:36:47.639: INFO: stderr: "I0129 13:36:47.328253    1767 log.go:172] (0xc0009de420) (0xc0005866e0) Create stream\nI0129 13:36:47.328493    1767 log.go:172] (0xc0009de420) (0xc0005866e0) Stream added, broadcasting: 1\nI0129 13:36:47.346414    1767 log.go:172] (0xc0009de420) Reply frame received for 1\nI0129 13:36:47.346526    1767 log.go:172] (0xc0009de420) (0xc000586000) Create stream\nI0129 13:36:47.346542    1767 log.go:172] (0xc0009de420) (0xc000586000) Stream added, broadcasting: 3\nI0129 13:36:47.348884    1767 log.go:172] (0xc0009de420) Reply frame received for 3\nI0129 13:36:47.348996    1767 log.go:172] (0xc0009de420) (0xc000340280) Create stream\nI0129 13:36:47.349008    1767 log.go:172] (0xc0009de420) (0xc000340280) Stream added, broadcasting: 5\nI0129 13:36:47.350900    1767 log.go:172] (0xc0009de420) Reply frame received for 5\nI0129 13:36:47.451377    1767 log.go:172] (0xc0009de420) Data frame received for 5\nI0129 13:36:47.451567    1767 log.go:172] (0xc000340280) (5) Data frame handling\nI0129 13:36:47.451603    1767 log.go:172] (0xc000340280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:36:47.494223    1767 log.go:172] (0xc0009de420) Data frame received for 3\nI0129 13:36:47.494320    1767 log.go:172] (0xc000586000) (3) Data frame handling\nI0129 13:36:47.494358    1767 log.go:172] (0xc000586000) (3) Data frame sent\nI0129 13:36:47.614382    1767 log.go:172] (0xc0009de420) Data frame received for 1\nI0129 13:36:47.614543    1767 log.go:172] (0xc0005866e0) (1) Data frame handling\nI0129 13:36:47.614654    1767 log.go:172] (0xc0005866e0) (1) Data frame sent\nI0129 13:36:47.615134    1767 log.go:172] (0xc0009de420) (0xc000340280) Stream removed, broadcasting: 5\nI0129 13:36:47.615993    1767 log.go:172] (0xc0009de420) (0xc0005866e0) Stream removed, broadcasting: 1\nI0129 13:36:47.616966    1767 log.go:172] (0xc0009de420) (0xc000586000) Stream removed, broadcasting: 3\nI0129 13:36:47.617059    1767 log.go:172] (0xc0009de420) Go away received\nI0129 13:36:47.618543    1767 log.go:172] (0xc0009de420) (0xc0005866e0) Stream removed, broadcasting: 1\nI0129 13:36:47.618603    1767 log.go:172] (0xc0009de420) (0xc000586000) Stream removed, broadcasting: 3\nI0129 13:36:47.618616    1767 log.go:172] (0xc0009de420) (0xc000340280) Stream removed, broadcasting: 5\n"
Jan 29 13:36:47.639: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:36:47.639: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 13:36:47.639: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 13:36:47.665: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 29 13:36:57.683: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:36:57.683: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 13:36:57.683: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
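With every probe failing, the controller's status should converge to zero ready replicas even though the pods are still Running; that status field is what the harness polls before issuing the scale-down. Checked by hand:

  kubectl get statefulset ss --namespace=statefulset-6175 -o jsonpath='{.status.readyReplicas}'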
Jan 29 13:36:57.748: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 29 13:36:57.748: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:36:57.748: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:36:57.748: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:36:57.748: INFO: 
Jan 29 13:36:57.748: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 13:36:59.746: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 29 13:36:59.746: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:36:59.746: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:36:59.746: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:36:59.746: INFO: 
Jan 29 13:36:59.746: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 13:37:00.759: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 29 13:37:00.759: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:00.759: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:00.759: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:00.759: INFO: 
Jan 29 13:37:00.759: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 13:37:01.780: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 29 13:37:01.780: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:01.780: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:01.780: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:01.780: INFO: 
Jan 29 13:37:01.780: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 13:37:02.792: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 29 13:37:02.792: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:02.792: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:02.792: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:02.792: INFO: 
Jan 29 13:37:02.792: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 13:37:03.809: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 29 13:37:03.809: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:03.809: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:03.809: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:33 +0000 UTC  }]
Jan 29 13:37:03.810: INFO: 
Jan 29 13:37:03.810: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 13:37:04.830: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 29 13:37:04.830: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:04.830: INFO: 
Jan 29 13:37:04.830: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 29 13:37:05.844: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 29 13:37:05.844: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:05.844: INFO: 
Jan 29 13:37:05.844: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 29 13:37:06.861: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 29 13:37:06.861: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:36:13 +0000 UTC  }]
Jan 29 13:37:06.861: INFO: 
Jan 29 13:37:06.861: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6175
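Scaling to zero while every pod is unready is the mirror of the earlier scale-up check: a Parallel StatefulSet deletes all pods at once instead of waiting on each ordinal, which is why the pod tables above show all three entering GRACE together. The equivalent manual request:

  kubectl scale statefulset ss --namespace=statefulset-6175 --replicas=0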
Jan 29 13:37:07.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:37:08.140: INFO: rc: 1
Jan 29 13:37:08.140: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001bf1320 exit status 1   true [0xc002a70818 0xc002a70830 0xc002a70848] [0xc002a70818 0xc002a70830 0xc002a70848] [0xc002a70828 0xc002a70840] [0xba6c50 0xba6c50] 0xc002156000 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
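This failure, and the long run of near-identical ones below, is expected: the harness keeps trying to restore index.html on ss-0 every ten seconds, but the scale-down has already torn the pod apart, first the container ("container not found"), then the pod object itself ("pods \"ss-0\" not found"), so every exec exits 1. The retry shape is roughly the following (a sketch of the pattern visible in the log, not the suite's actual Go helper):

  while ! kubectl exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; do
    sleep 10
  done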
Jan 29 13:37:18.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:37:18.365: INFO: rc: 1
Jan 29 13:37:18.366: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bf13e0 exit status 1   true [0xc002a70850 0xc002a70870 0xc002a70888] [0xc002a70850 0xc002a70870 0xc002a70888] [0xc002a70860 0xc002a70880] [0xba6c50 0xba6c50] 0xc0021565a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:37:28.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:37:28.651: INFO: rc: 1
Jan 29 13:37:28.651: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0021bc9f0 exit status 1   true [0xc002bba098 0xc002bba0b0 0xc002bba0c8] [0xc002bba098 0xc002bba0b0 0xc002bba0c8] [0xc002bba0a8 0xc002bba0c0] [0xba6c50 0xba6c50] 0xc0025208a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:37:38.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:37:38.758: INFO: rc: 1
Jan 29 13:37:38.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002366090 exit status 1   true [0xc000ba8040 0xc000ba8308 0xc000ba83f8] [0xc000ba8040 0xc000ba8308 0xc000ba83f8] [0xc000ba82b8 0xc000ba83d0] [0xba6c50 0xba6c50] 0xc002451a40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:37:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:37:48.897: INFO: rc: 1
Jan 29 13:37:48.897: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002cd2090 exit status 1   true [0xc000640048 0xc000640190 0xc000640260] [0xc000640048 0xc000640190 0xc000640260] [0xc0006400e8 0xc000640208] [0xba6c50 0xba6c50] 0xc0022d8de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:37:58.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:37:59.186: INFO: rc: 1
Jan 29 13:37:59.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002cd2150 exit status 1   true [0xc000640270 0xc0006404a8 0xc0006405b0] [0xc000640270 0xc0006404a8 0xc0006405b0] [0xc0006403d8 0xc0006404e8] [0xba6c50 0xba6c50] 0xc0022d9d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:38:09.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:38:09.376: INFO: rc: 1
Jan 29 13:38:09.377: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019fa090 exit status 1   true [0xc0006c6eb8 0xc0006c7260 0xc0006c7510] [0xc0006c6eb8 0xc0006c7260 0xc0006c7510] [0xc0006c7170 0xc0006c7418] [0xba6c50 0xba6c50] 0xc00138eea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:38:19.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:38:19.583: INFO: rc: 1
Jan 29 13:38:19.583: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002cd2240 exit status 1   true [0xc000640618 0xc000640740 0xc000640960] [0xc000640618 0xc000640740 0xc000640960] [0xc0006406f8 0xc0006408f8] [0xba6c50 0xba6c50] 0xc001583140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:38:29.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:38:29.798: INFO: rc: 1
Jan 29 13:38:29.798: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001ff80c0 exit status 1   true [0xc0003ae1d8 0xc001cda008 0xc001cda020] [0xc0003ae1d8 0xc001cda008 0xc001cda020] [0xc001cda000 0xc001cda018] [0xba6c50 0xba6c50] 0xc001e239e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:38:39.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:38:40.042: INFO: rc: 1
Jan 29 13:38:40.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023661b0 exit status 1   true [0xc000ba8420 0xc000ba84b8 0xc000ba8530] [0xc000ba8420 0xc000ba84b8 0xc000ba8530] [0xc000ba8480 0xc000ba8518] [0xba6c50 0xba6c50] 0xc0014eb200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:38:50.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:38:50.216: INFO: rc: 1
Jan 29 13:38:50.216: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002366270 exit status 1   true [0xc000ba8550 0xc000ba8718 0xc000ba88a8] [0xc000ba8550 0xc000ba8718 0xc000ba88a8] [0xc000ba86c0 0xc000ba87d0] [0xba6c50 0xba6c50] 0xc00235c7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:39:00.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:39:00.411: INFO: rc: 1
Jan 29 13:39:00.411: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019fa210 exit status 1   true [0xc0006c7658 0xc0006c77b0 0xc0006c7800] [0xc0006c7658 0xc0006c77b0 0xc0006c7800] [0xc0006c7778 0xc0006c77f8] [0xba6c50 0xba6c50] 0xc001f1d500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:39:10.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:39:10.637: INFO: rc: 1
Jan 29 13:39:10.637: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002366360 exit status 1   true [0xc000ba8960 0xc000ba8980 0xc000ba8a40] [0xc000ba8960 0xc000ba8980 0xc000ba8a40] [0xc000ba8970 0xc000ba8a08] [0xba6c50 0xba6c50] 0xc0018a37a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:39:20.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:39:20.911: INFO: rc: 1
Jan 29 13:39:20.911: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019fa300 exit status 1   true [0xc0006c7a88 0xc0006c7d50 0xc0006c7f30] [0xc0006c7a88 0xc0006c7d50 0xc0006c7f30] [0xc0006c7c50 0xc0006c7ea0] [0xba6c50 0xba6c50] 0xc001d39140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:39:30.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:39:31.056: INFO: rc: 1
Jan 29 13:39:31.057: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019fa3c0 exit status 1   true [0xc002bba000 0xc002bba018 0xc002bba030] [0xc002bba000 0xc002bba018 0xc002bba030] [0xc002bba010 0xc002bba028] [0xba6c50 0xba6c50] 0xc001c4a060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:39:41.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:39:41.257: INFO: rc: 1
Jan 29 13:39:41.258: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002cd20c0 exit status 1   true [0xc0003ae1d8 0xc0006c7170 0xc0006c7418] [0xc0003ae1d8 0xc0006c7170 0xc0006c7418] [0xc0006c7038 0xc0006c7350] [0xba6c50 0xba6c50] 0xc001d38720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:39:51.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:39:51.449: INFO: rc: 1
Jan 29 13:39:51.450: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019fa0c0 exit status 1   true [0xc000640048 0xc000640190 0xc000640260] [0xc000640048 0xc000640190 0xc000640260] [0xc0006400e8 0xc000640208] [0xba6c50 0xba6c50] 0xc0019ae7e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:40:01.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:40:01.636: INFO: rc: 1
Jan 29 13:40:01.637: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023660c0 exit status 1   true [0xc002bba000 0xc002bba018 0xc002bba030] [0xc002bba000 0xc002bba018 0xc002bba030] [0xc002bba010 0xc002bba028] [0xba6c50 0xba6c50] 0xc00235c2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:40:11.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:40:11.885: INFO: rc: 1
Jan 29 13:40:11.886: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023661e0 exit status 1   true [0xc002bba038 0xc002bba050 0xc002bba068] [0xc002bba038 0xc002bba050 0xc002bba068] [0xc002bba048 0xc002bba060] [0xba6c50 0xba6c50] 0xc00235d7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:40:21.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:40:22.096: INFO: rc: 1
Jan 29 13:40:22.096: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023662d0 exit status 1   true [0xc002bba070 0xc002bba088 0xc002bba0a0] [0xc002bba070 0xc002bba088 0xc002bba0a0] [0xc002bba080 0xc002bba098] [0xba6c50 0xba6c50] 0xc0014eb200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:40:32.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:40:32.345: INFO: rc: 1
Jan 29 13:40:32.345: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0019fa240 exit status 1   true [0xc000640270 0xc0006404a8 0xc0006405b0] [0xc000640270 0xc0006404a8 0xc0006405b0] [0xc0006403d8 0xc0006404e8] [0xba6c50 0xba6c50] 0xc00197de00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 29 13:40:42.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:40:42.583: INFO: rc: 1
Jan 29 13:40:42.584: INFO: Waiting 10s to retry failed RunHostCmd: error running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true':
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
(The same exec was retried at 10s intervals from 13:40:52.585 through 13:42:03.953, every attempt returning rc: 1 with the same NotFound error for pod "ss-0".)
Jan 29 13:42:14.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:42:14.338: INFO: rc: 1
Jan 29 13:42:14.338: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 29 13:42:14.338: INFO: Scaling statefulset ss to 0
Jan 29 13:42:14.353: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 29 13:42:14.355: INFO: Deleting all statefulset in ns statefulset-6175
Jan 29 13:42:14.359: INFO: Scaling statefulset ss to 0
Jan 29 13:42:14.374: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 13:42:14.383: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:42:14.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6175" for this suite.
Jan 29 13:42:20.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:42:20.593: INFO: namespace statefulset-6175 deletion completed in 6.168293996s

• [SLOW TEST:367.661 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
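
A note on the retry pattern above: the harness's RunHostCmd helper simply re-runs the exec on a fixed 10s cadence until the command succeeds or the wait budget is spent. A minimal shell sketch of the same loop, assuming a cluster reachable via /root/.kube/config (pod and namespace names taken from this run):

# Re-run the exec against ss-0 on the harness's 10s retry cadence,
# giving up after 10 attempts. kubectl itself exits nonzero while the
# pod is NotFound, before the inner shell (and its || true) ever runs.
for attempt in $(seq 1 10); do
  if kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6175 ss-0 -- \
       /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; then
    break
  fi
  sleep 10
done
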
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:42:20.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5257.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5257.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 13:42:32.824: INFO: File wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-8cfbc95b-14de-4c94-b19b-d42fef14f2d6 contains '' instead of 'foo.example.com.'
Jan 29 13:42:32.830: INFO: File jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-8cfbc95b-14de-4c94-b19b-d42fef14f2d6 contains '' instead of 'foo.example.com.'
Jan 29 13:42:32.830: INFO: Lookups using dns-5257/dns-test-8cfbc95b-14de-4c94-b19b-d42fef14f2d6 failed for: [wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local]

Jan 29 13:42:37.857: INFO: DNS probes using dns-test-8cfbc95b-14de-4c94-b19b-d42fef14f2d6 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5257.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5257.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 13:42:52.072: INFO: File wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 contains '' instead of 'bar.example.com.'
Jan 29 13:42:52.079: INFO: File jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 contains '' instead of 'bar.example.com.'
Jan 29 13:42:52.079: INFO: Lookups using dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 failed for: [wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local]

Jan 29 13:42:57.096: INFO: File wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 29 13:42:57.107: INFO: File jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 29 13:42:57.107: INFO: Lookups using dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 failed for: [wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local]

Jan 29 13:43:02.092: INFO: File wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 29 13:43:02.097: INFO: File jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 29 13:43:02.097: INFO: Lookups using dns-5257/dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 failed for: [wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local]

Jan 29 13:43:07.108: INFO: DNS probes using dns-test-c6146b96-e6cc-4eee-a8d2-5131416ffc07 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5257.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5257.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 13:43:21.405: INFO: File wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-942189fe-8f25-4e3d-8151-c644f07b38c6 contains '' instead of '10.106.157.231'
Jan 29 13:43:21.414: INFO: File jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-942189fe-8f25-4e3d-8151-c644f07b38c6 contains '' instead of '10.106.157.231'
Jan 29 13:43:21.414: INFO: Lookups using dns-5257/dns-test-942189fe-8f25-4e3d-8151-c644f07b38c6 failed for: [wheezy_udp@dns-test-service-3.dns-5257.svc.cluster.local jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local]

Jan 29 13:43:26.445: INFO: File jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local from pod  dns-5257/dns-test-942189fe-8f25-4e3d-8151-c644f07b38c6 contains '' instead of '10.106.157.231'
Jan 29 13:43:26.445: INFO: Lookups using dns-5257/dns-test-942189fe-8f25-4e3d-8151-c644f07b38c6 failed for: [jessie_udp@dns-test-service-3.dns-5257.svc.cluster.local]

Jan 29 13:43:31.445: INFO: DNS probes using dns-test-942189fe-8f25-4e3d-8151-c644f07b38c6 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:43:31.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5257" for this suite.
Jan 29 13:43:39.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:43:39.932: INFO: namespace dns-5257 deletion completed in 8.195951252s

• [SLOW TEST:79.336 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
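
The probes above drive an ExternalName service through CNAME changes and a final conversion to ClusterIP. A minimal sketch of the same shape using plain kubectl, with illustrative names (the e2e run generates its own namespace and service name):

# ExternalName services publish a CNAME rather than a ClusterIP.
kubectl create service externalname dns-test-service-3 --external-name foo.example.com
# Repoint the CNAME, as the test does when it expects bar.example.com.
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
# From inside the cluster, a CNAME lookup should now return bar.example.com.
# (tutum/dnsutils is an illustrative image that ships dig.)
kubectl run dnsutils --rm -it --restart=Never --image=tutum/dnsutils -- \
  dig +short dns-test-service-3.default.svc.cluster.local CNAME
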
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:43:39.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0129 13:43:55.394161       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 13:43:55.394: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:43:55.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5373" for this suite.
Jan 29 13:44:05.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:44:05.437: INFO: namespace gc-5373 deletion completed in 8.997940165s

• [SLOW TEST:25.504 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
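
The pass/fail condition here is pure ownerReference bookkeeping: pods adopted by both replication controllers must survive deletion of one owner while the other remains valid. A minimal sketch of inspecting that state, using the controller names from the STEP lines above:

# List each pod with the names of its owners; half of the
# simpletest pods should show both RCs.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
# Deleting one owner must leave dependents that still have a valid owner.
kubectl delete rc simpletest-rc-to-be-deleted
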
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:44:05.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 29 13:44:05.802: INFO: Waiting up to 5m0s for pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705" in namespace "containers-1775" to be "success or failure"
Jan 29 13:44:05.851: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Pending", Reason="", readiness=false. Elapsed: 48.663609ms
Jan 29 13:44:08.092: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289407279s
Jan 29 13:44:10.102: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300067961s
Jan 29 13:44:12.134: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331116379s
Jan 29 13:44:14.147: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Pending", Reason="", readiness=false. Elapsed: 8.344529303s
Jan 29 13:44:16.157: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Pending", Reason="", readiness=false. Elapsed: 10.354229315s
Jan 29 13:44:18.171: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.368729188s
STEP: Saw pod success
Jan 29 13:44:18.171: INFO: Pod "client-containers-256a6500-f81b-4191-ae78-a0078afd5705" satisfied condition "success or failure"
Jan 29 13:44:18.182: INFO: Trying to get logs from node iruya-node pod client-containers-256a6500-f81b-4191-ae78-a0078afd5705 container test-container: 
STEP: delete the pod
Jan 29 13:44:18.351: INFO: Waiting for pod client-containers-256a6500-f81b-4191-ae78-a0078afd5705 to disappear
Jan 29 13:44:18.357: INFO: Pod client-containers-256a6500-f81b-4191-ae78-a0078afd5705 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:44:18.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1775" for this suite.
Jan 29 13:44:24.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:44:24.592: INFO: namespace containers-1775 deletion completed in 6.20398551s

• [SLOW TEST:19.155 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
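
The pod in this test sets neither command nor args, so the container runs the image's own ENTRYPOINT/CMD. A minimal sketch of the same shape, assuming busybox as an illustrative image (its default CMD is sh, which exits immediately without a TTY, so the pod reports Succeeded much like the run above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: use-image-defaults
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31    # no command/args: image defaults apply
EOF
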
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:44:24.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4316
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 29 13:44:24.772: INFO: Found 0 stateful pods, waiting for 3
Jan 29 13:44:35.213: INFO: Found 2 stateful pods, waiting for 3
Jan 29 13:44:44.796: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:44:44.796: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:44:44.796: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 29 13:44:54.785: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:44:54.785: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:44:54.785: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 13:44:54.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4316 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:44:57.078: INFO: stderr: "I0129 13:44:56.787073    2379 log.go:172] (0xc0006ce420) (0xc0006f8640) Create stream\nI0129 13:44:56.787474    2379 log.go:172] (0xc0006ce420) (0xc0006f8640) Stream added, broadcasting: 1\nI0129 13:44:56.794757    2379 log.go:172] (0xc0006ce420) Reply frame received for 1\nI0129 13:44:56.794812    2379 log.go:172] (0xc0006ce420) (0xc00061a320) Create stream\nI0129 13:44:56.794830    2379 log.go:172] (0xc0006ce420) (0xc00061a320) Stream added, broadcasting: 3\nI0129 13:44:56.797428    2379 log.go:172] (0xc0006ce420) Reply frame received for 3\nI0129 13:44:56.797467    2379 log.go:172] (0xc0006ce420) (0xc0007a2000) Create stream\nI0129 13:44:56.797505    2379 log.go:172] (0xc0006ce420) (0xc0007a2000) Stream added, broadcasting: 5\nI0129 13:44:56.799842    2379 log.go:172] (0xc0006ce420) Reply frame received for 5\nI0129 13:44:56.947906    2379 log.go:172] (0xc0006ce420) Data frame received for 5\nI0129 13:44:56.948102    2379 log.go:172] (0xc0007a2000) (5) Data frame handling\nI0129 13:44:56.948174    2379 log.go:172] (0xc0007a2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:44:56.985472    2379 log.go:172] (0xc0006ce420) Data frame received for 3\nI0129 13:44:56.985575    2379 log.go:172] (0xc00061a320) (3) Data frame handling\nI0129 13:44:56.985602    2379 log.go:172] (0xc00061a320) (3) Data frame sent\nI0129 13:44:57.064421    2379 log.go:172] (0xc0006ce420) Data frame received for 1\nI0129 13:44:57.064510    2379 log.go:172] (0xc0006ce420) (0xc00061a320) Stream removed, broadcasting: 3\nI0129 13:44:57.064648    2379 log.go:172] (0xc0006ce420) (0xc0007a2000) Stream removed, broadcasting: 5\nI0129 13:44:57.064729    2379 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0129 13:44:57.064756    2379 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0129 13:44:57.064763    2379 log.go:172] (0xc0006ce420) (0xc0006f8640) Stream removed, broadcasting: 1\nI0129 13:44:57.064774    2379 log.go:172] (0xc0006ce420) Go away received\nI0129 13:44:57.065932    2379 log.go:172] (0xc0006ce420) (0xc0006f8640) Stream removed, broadcasting: 1\nI0129 13:44:57.065946    2379 log.go:172] (0xc0006ce420) (0xc00061a320) Stream removed, broadcasting: 3\nI0129 13:44:57.065951    2379 log.go:172] (0xc0006ce420) (0xc0007a2000) Stream removed, broadcasting: 5\n"
Jan 29 13:44:57.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:44:57.079: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 29 13:45:07.154: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 29 13:45:17.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4316 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:45:17.598: INFO: stderr: "I0129 13:45:17.420841    2409 log.go:172] (0xc000940370) (0xc00065a820) Create stream\nI0129 13:45:17.421035    2409 log.go:172] (0xc000940370) (0xc00065a820) Stream added, broadcasting: 1\nI0129 13:45:17.425822    2409 log.go:172] (0xc000940370) Reply frame received for 1\nI0129 13:45:17.425909    2409 log.go:172] (0xc000940370) (0xc00065a8c0) Create stream\nI0129 13:45:17.425922    2409 log.go:172] (0xc000940370) (0xc00065a8c0) Stream added, broadcasting: 3\nI0129 13:45:17.427053    2409 log.go:172] (0xc000940370) Reply frame received for 3\nI0129 13:45:17.427100    2409 log.go:172] (0xc000940370) (0xc0007dc000) Create stream\nI0129 13:45:17.427126    2409 log.go:172] (0xc000940370) (0xc0007dc000) Stream added, broadcasting: 5\nI0129 13:45:17.427953    2409 log.go:172] (0xc000940370) Reply frame received for 5\nI0129 13:45:17.499575    2409 log.go:172] (0xc000940370) Data frame received for 3\nI0129 13:45:17.499700    2409 log.go:172] (0xc00065a8c0) (3) Data frame handling\nI0129 13:45:17.499747    2409 log.go:172] (0xc00065a8c0) (3) Data frame sent\nI0129 13:45:17.500141    2409 log.go:172] (0xc000940370) Data frame received for 5\nI0129 13:45:17.500166    2409 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0129 13:45:17.500197    2409 log.go:172] (0xc0007dc000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0129 13:45:17.588945    2409 log.go:172] (0xc000940370) Data frame received for 1\nI0129 13:45:17.589563    2409 log.go:172] (0xc000940370) (0xc00065a8c0) Stream removed, broadcasting: 3\nI0129 13:45:17.589841    2409 log.go:172] (0xc00065a820) (1) Data frame handling\nI0129 13:45:17.589908    2409 log.go:172] (0xc00065a820) (1) Data frame sent\nI0129 13:45:17.590050    2409 log.go:172] (0xc000940370) (0xc0007dc000) Stream removed, broadcasting: 5\nI0129 13:45:17.590128    2409 log.go:172] (0xc000940370) (0xc00065a820) Stream removed, broadcasting: 1\nI0129 13:45:17.590183    2409 log.go:172] (0xc000940370) Go away received\nI0129 13:45:17.591082    2409 log.go:172] (0xc000940370) (0xc00065a820) Stream removed, broadcasting: 1\nI0129 13:45:17.591097    2409 log.go:172] (0xc000940370) (0xc00065a8c0) Stream removed, broadcasting: 3\nI0129 13:45:17.591103    2409 log.go:172] (0xc000940370) (0xc0007dc000) Stream removed, broadcasting: 5\n"
Jan 29 13:45:17.598: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:45:17.598: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 13:45:27.638: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:45:27.638: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 13:45:27.638: INFO: Waiting for Pod statefulset-4316/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 13:45:37.656: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:45:37.657: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 13:45:37.657: INFO: Waiting for Pod statefulset-4316/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 13:45:47.661: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:45:47.661: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 13:45:57.654: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 29 13:46:07.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4316 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 13:46:08.183: INFO: stderr: "I0129 13:46:07.914449    2431 log.go:172] (0xc0008f2370) (0xc0004cc8c0) Create stream\nI0129 13:46:07.914806    2431 log.go:172] (0xc0008f2370) (0xc0004cc8c0) Stream added, broadcasting: 1\nI0129 13:46:07.920764    2431 log.go:172] (0xc0008f2370) Reply frame received for 1\nI0129 13:46:07.920969    2431 log.go:172] (0xc0008f2370) (0xc000934000) Create stream\nI0129 13:46:07.920985    2431 log.go:172] (0xc0008f2370) (0xc000934000) Stream added, broadcasting: 3\nI0129 13:46:07.923211    2431 log.go:172] (0xc0008f2370) Reply frame received for 3\nI0129 13:46:07.923286    2431 log.go:172] (0xc0008f2370) (0xc000772000) Create stream\nI0129 13:46:07.923300    2431 log.go:172] (0xc0008f2370) (0xc000772000) Stream added, broadcasting: 5\nI0129 13:46:07.924687    2431 log.go:172] (0xc0008f2370) Reply frame received for 5\nI0129 13:46:08.039128    2431 log.go:172] (0xc0008f2370) Data frame received for 5\nI0129 13:46:08.039233    2431 log.go:172] (0xc000772000) (5) Data frame handling\nI0129 13:46:08.039269    2431 log.go:172] (0xc000772000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0129 13:46:08.073787    2431 log.go:172] (0xc0008f2370) Data frame received for 3\nI0129 13:46:08.073828    2431 log.go:172] (0xc000934000) (3) Data frame handling\nI0129 13:46:08.073839    2431 log.go:172] (0xc000934000) (3) Data frame sent\nI0129 13:46:08.172915    2431 log.go:172] (0xc0008f2370) Data frame received for 1\nI0129 13:46:08.173043    2431 log.go:172] (0xc0004cc8c0) (1) Data frame handling\nI0129 13:46:08.173093    2431 log.go:172] (0xc0004cc8c0) (1) Data frame sent\nI0129 13:46:08.173145    2431 log.go:172] (0xc0008f2370) (0xc0004cc8c0) Stream removed, broadcasting: 1\nI0129 13:46:08.173435    2431 log.go:172] (0xc0008f2370) (0xc000934000) Stream removed, broadcasting: 3\nI0129 13:46:08.175349    2431 log.go:172] (0xc0008f2370) (0xc000772000) Stream removed, broadcasting: 5\nI0129 13:46:08.175621    2431 log.go:172] (0xc0008f2370) Go away received\nI0129 13:46:08.175750    2431 log.go:172] (0xc0008f2370) (0xc0004cc8c0) Stream removed, broadcasting: 1\nI0129 13:46:08.175772    2431 log.go:172] (0xc0008f2370) (0xc000934000) Stream removed, broadcasting: 3\nI0129 13:46:08.175791    2431 log.go:172] (0xc0008f2370) (0xc000772000) Stream removed, broadcasting: 5\n"
Jan 29 13:46:08.184: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 13:46:08.184: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 13:46:18.283: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 29 13:46:28.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4316 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 13:46:29.429: INFO: stderr: "I0129 13:46:29.126629    2452 log.go:172] (0xc0004d8790) (0xc00064ca00) Create stream\nI0129 13:46:29.127061    2452 log.go:172] (0xc0004d8790) (0xc00064ca00) Stream added, broadcasting: 1\nI0129 13:46:29.135203    2452 log.go:172] (0xc0004d8790) Reply frame received for 1\nI0129 13:46:29.135606    2452 log.go:172] (0xc0004d8790) (0xc0005d4000) Create stream\nI0129 13:46:29.135666    2452 log.go:172] (0xc0004d8790) (0xc0005d4000) Stream added, broadcasting: 3\nI0129 13:46:29.138402    2452 log.go:172] (0xc0004d8790) Reply frame received for 3\nI0129 13:46:29.138466    2452 log.go:172] (0xc0004d8790) (0xc0005d40a0) Create stream\nI0129 13:46:29.138480    2452 log.go:172] (0xc0004d8790) (0xc0005d40a0) Stream added, broadcasting: 5\nI0129 13:46:29.140855    2452 log.go:172] (0xc0004d8790) Reply frame received for 5\nI0129 13:46:29.254370    2452 log.go:172] (0xc0004d8790) Data frame received for 3\nI0129 13:46:29.254537    2452 log.go:172] (0xc0005d4000) (3) Data frame handling\nI0129 13:46:29.254605    2452 log.go:172] (0xc0005d4000) (3) Data frame sent\nI0129 13:46:29.254786    2452 log.go:172] (0xc0004d8790) Data frame received for 5\nI0129 13:46:29.254825    2452 log.go:172] (0xc0005d40a0) (5) Data frame handling\nI0129 13:46:29.254866    2452 log.go:172] (0xc0005d40a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0129 13:46:29.403953    2452 log.go:172] (0xc0004d8790) Data frame received for 1\nI0129 13:46:29.404729    2452 log.go:172] (0xc0004d8790) (0xc0005d40a0) Stream removed, broadcasting: 5\nI0129 13:46:29.404951    2452 log.go:172] (0xc00064ca00) (1) Data frame handling\nI0129 13:46:29.404999    2452 log.go:172] (0xc00064ca00) (1) Data frame sent\nI0129 13:46:29.405415    2452 log.go:172] (0xc0004d8790) (0xc0005d4000) Stream removed, broadcasting: 3\nI0129 13:46:29.405518    2452 log.go:172] (0xc0004d8790) (0xc00064ca00) Stream removed, broadcasting: 1\nI0129 13:46:29.405568    2452 log.go:172] (0xc0004d8790) Go away received\nI0129 13:46:29.407435    2452 log.go:172] (0xc0004d8790) (0xc00064ca00) Stream removed, broadcasting: 1\nI0129 13:46:29.407450    2452 log.go:172] (0xc0004d8790) (0xc0005d4000) Stream removed, broadcasting: 3\nI0129 13:46:29.407457    2452 log.go:172] (0xc0004d8790) (0xc0005d40a0) Stream removed, broadcasting: 5\n"
Jan 29 13:46:29.430: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 13:46:29.430: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 13:46:39.471: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:46:39.471: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:46:39.471: INFO: Waiting for Pod statefulset-4316/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:46:39.471: INFO: Waiting for Pod statefulset-4316/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:46:49.486: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:46:49.486: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:46:49.486: INFO: Waiting for Pod statefulset-4316/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:46:59.484: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:46:59.484: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:46:59.484: INFO: Waiting for Pod statefulset-4316/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:47:09.758: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:47:09.758: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:47:19.481: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
Jan 29 13:47:19.481: INFO: Waiting for Pod statefulset-4316/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 29 13:47:29.487: INFO: Waiting for StatefulSet statefulset-4316/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 29 13:47:39.494: INFO: Deleting all statefulset in ns statefulset-4316
Jan 29 13:47:39.502: INFO: Scaling statefulset ss2 to 0
Jan 29 13:48:09.535: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 13:48:09.540: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:48:09.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4316" for this suite.
Jan 29 13:48:17.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:48:17.815: INFO: namespace statefulset-4316 deletion completed in 8.201178971s

• [SLOW TEST:233.222 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
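
The update and rollback exercised above can be driven the same way with kubectl's rollout machinery; a minimal sketch using the names from this run (the container name nginx is an assumption, not shown in the log):

# Roll the template forward, wait for convergence, then roll back;
# each change creates a new controller revision that is applied in
# reverse ordinal order, as the STEP lines describe.
kubectl -n statefulset-4316 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-4316 rollout status statefulset/ss2
kubectl -n statefulset-4316 rollout undo statefulset/ss2
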
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:48:17.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 29 13:48:26.588: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9994 pod-service-account-01976a8b-0299-49f0-bac3-7448717ca450 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 29 13:48:27.184: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9994 pod-service-account-01976a8b-0299-49f0-bac3-7448717ca450 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 29 13:48:27.629: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9994 pod-service-account-01976a8b-0299-49f0-bac3-7448717ca450 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:48:28.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9994" for this suite.
Jan 29 13:48:34.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:48:34.358: INFO: namespace svcaccounts-9994 deletion completed in 6.155906444s

• [SLOW TEST:16.542 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
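
The three reads above target the standard projection path for the namespace's service-account credentials. A minimal sketch against any running pod (pod name illustrative):

# token, ca.crt and namespace are mounted into every container unless
# automountServiceAccountToken is disabled on the pod or service account.
for f in token ca.crt namespace; do
  kubectl exec my-pod -- cat "/var/run/secrets/kubernetes.io/serviceaccount/$f"
done
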
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:48:34.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:48:34.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9" in namespace "downward-api-962" to be "success or failure"
Jan 29 13:48:34.487: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.140638ms
Jan 29 13:48:36.504: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029582161s
Jan 29 13:48:38.520: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045631985s
Jan 29 13:48:40.538: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064116076s
Jan 29 13:48:42.561: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086543798s
Jan 29 13:48:44.578: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103330142s
STEP: Saw pod success
Jan 29 13:48:44.578: INFO: Pod "downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9" satisfied condition "success or failure"
Jan 29 13:48:44.587: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9 container client-container: 
STEP: delete the pod
Jan 29 13:48:44.673: INFO: Waiting for pod downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9 to disappear
Jan 29 13:48:44.775: INFO: Pod downwardapi-volume-704dae07-ab17-4df3-82ff-14f4358f4cb9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:48:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-962" for this suite.
Jan 29 13:48:50.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:48:51.033: INFO: namespace downward-api-962 deletion completed in 6.246692948s

• [SLOW TEST:16.674 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
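
The volume under test requests limits.cpu for a container that sets no CPU limit, so the kubelet substitutes the node's allocatable CPU. A minimal sketch of that volume shape (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m       # with no limit set, node allocatable is reported
EOF
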
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:48:51.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 13:48:51.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 29 13:48:51.746: INFO: stderr: ""
Jan 29 13:48:51.746: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:48:51.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2417" for this suite.
Jan 29 13:48:57.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:48:57.986: INFO: namespace kubectl-2417 deletion completed in 6.229688536s

• [SLOW TEST:6.953 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
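
The assertion above only checks that both the client and server halves are printed; the same data is available structured:

# version.Info for client and server as JSON.
kubectl version -o json
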
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:48:57.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-jvf8
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 13:48:58.209: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jvf8" in namespace "subpath-5710" to be "success or failure"
Jan 29 13:48:58.215: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.738501ms
Jan 29 13:49:00.260: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050115993s
Jan 29 13:49:02.280: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070224479s
Jan 29 13:49:04.302: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092181166s
Jan 29 13:49:06.312: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 8.10279816s
Jan 29 13:49:08.324: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 10.114886678s
Jan 29 13:49:10.369: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 12.159948359s
Jan 29 13:49:12.375: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 14.165117311s
Jan 29 13:49:14.391: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 16.181222723s
Jan 29 13:49:16.397: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 18.186982613s
Jan 29 13:49:18.467: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 20.257592437s
Jan 29 13:49:20.482: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 22.272639761s
Jan 29 13:49:22.658: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 24.44862987s
Jan 29 13:49:24.678: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 26.468307492s
Jan 29 13:49:26.690: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Running", Reason="", readiness=true. Elapsed: 28.480144353s
Jan 29 13:49:28.700: INFO: Pod "pod-subpath-test-projected-jvf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.490463751s
STEP: Saw pod success
Jan 29 13:49:28.700: INFO: Pod "pod-subpath-test-projected-jvf8" satisfied condition "success or failure"
Jan 29 13:49:28.703: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-jvf8 container test-container-subpath-projected-jvf8: 
STEP: delete the pod
Jan 29 13:49:28.792: INFO: Waiting for pod pod-subpath-test-projected-jvf8 to disappear
Jan 29 13:49:28.804: INFO: Pod pod-subpath-test-projected-jvf8 no longer exists
STEP: Deleting pod pod-subpath-test-projected-jvf8
Jan 29 13:49:28.804: INFO: Deleting pod "pod-subpath-test-projected-jvf8" in namespace "subpath-5710"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:49:28.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5710" for this suite.
Jan 29 13:49:34.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:49:35.048: INFO: namespace subpath-5710 deletion completed in 6.23565362s

• [SLOW TEST:37.060 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
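
The atomic-writer pod mounts a single key of a projected volume through subPath, which is what makes the write path atomic from the container's point of view. A minimal sketch of that shape (configMap and pod names illustrative):

kubectl create configmap subpath-data --from-literal=index.html=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.31
    command: ["sh", "-c", "cat /mnt/index.html"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/index.html
      subPath: index.html   # mount one key of the projected volume as a file
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-data
EOF
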
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:49:35.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 29 13:49:51.260: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:49:51.273: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:49:53.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:49:53.286: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:49:55.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:49:55.285: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:49:57.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:49:57.305: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:49:59.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:49:59.286: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:50:01.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:50:01.285: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:50:03.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:50:03.283: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:50:05.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:50:05.284: INFO: Pod pod-with-prestop-http-hook still exists
Jan 29 13:50:07.274: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 29 13:50:07.282: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:50:07.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7894" for this suite.
Jan 29 13:50:29.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:50:29.522: INFO: namespace container-lifecycle-hook-7894 deletion completed in 22.168391105s

• [SLOW TEST:54.474 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
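
The hook fired here is an HTTP GET that the kubelet sends before stopping the container; the separate handler pod created in BeforeEach receives it. A minimal sketch of the hook stanza (handler address and path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx:1.15-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.32.0.5   # illustrative: the hook-handler pod's IP
EOF
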
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:50:29.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-442427ec-ef6e-4cae-8ac5-b024dde04a98 in namespace container-probe-1027
Jan 29 13:50:39.935: INFO: Started pod busybox-442427ec-ef6e-4cae-8ac5-b024dde04a98 in namespace container-probe-1027
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 13:50:39.939: INFO: Initial restart count of pod busybox-442427ec-ef6e-4cae-8ac5-b024dde04a98 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:54:41.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1027" for this suite.
Jan 29 13:54:47.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:54:47.885: INFO: namespace container-probe-1027 deletion completed in 6.198969615s

• [SLOW TEST:258.364 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
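
The probe here must keep succeeding, so restartCount has to stay at its initial 0 for the full observation window. A minimal sketch of an equivalent pod (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.31
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount should remain 0 while /tmp/health exists:
kubectl get pod busybox-liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
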
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:54:47.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 29 13:54:47.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 29 13:54:48.119: INFO: stderr: ""
Jan 29 13:54:48.119: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:54:48.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1119" for this suite.
Jan 29 13:54:54.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:54:54.261: INFO: namespace kubectl-1119 deletion completed in 6.135678386s

• [SLOW TEST:6.375 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
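Reproducing this check by hand is a one-liner; the test merely asserts that the (ANSI-colored) stdout names the master endpoint:

$ kubectl --kubeconfig=/root/.kube/config cluster-info
# Kubernetes master is running at https://172.24.4.57:6443
# KubeDNS is running at https://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy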
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:54:54.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f322f3ec-705a-49fd-8d25-dbb556e76955
STEP: Creating configMap with name cm-test-opt-upd-0d9541bf-65e2-48ee-866c-42f864db652c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f322f3ec-705a-49fd-8d25-dbb556e76955
STEP: Updating configmap cm-test-opt-upd-0d9541bf-65e2-48ee-866c-42f864db652c
STEP: Creating configMap with name cm-test-opt-create-c12eaa5c-c806-4bd2-ab23-418a55252432
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:56:18.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7467" for this suite.
Jan 29 13:56:40.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:56:40.685: INFO: namespace configmap-7467 deletion completed in 22.170079174s

• [SLOW TEST:106.423 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
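The three configMaps above correspond to three volumes marked optional: one whose configMap is deleted mid-test, one that is updated, and one whose configMap is only created after the pod starts. optional: true is what lets the pod mount all three through every transition. Roughly, with shortened names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - {name: cm-del, mountPath: /etc/cm-del}
    - {name: cm-upd, mountPath: /etc/cm-upd}
    - {name: cm-create, mountPath: /etc/cm-create}
  volumes:
  - name: cm-del
    configMap: {name: cm-test-opt-del, optional: true}     # deleted mid-test
  - name: cm-upd
    configMap: {name: cm-test-opt-upd, optional: true}     # updated mid-test
  - name: cm-create
    configMap: {name: cm-test-opt-create, optional: true}  # created mid-test
EOF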
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:56:40.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 13:56:40.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f" in namespace "projected-8995" to be "success or failure"
Jan 29 13:56:40.822: INFO: Pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.909278ms
Jan 29 13:56:42.833: INFO: Pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017208629s
Jan 29 13:56:44.846: INFO: Pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029785563s
Jan 29 13:56:46.861: INFO: Pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044874803s
Jan 29 13:56:48.869: INFO: Pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052683415s
STEP: Saw pod success
Jan 29 13:56:48.869: INFO: Pod "downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f" satisfied condition "success or failure"
Jan 29 13:56:48.876: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f container client-container: 
STEP: delete the pod
Jan 29 13:56:48.972: INFO: Waiting for pod downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f to disappear
Jan 29 13:56:48.997: INFO: Pod downwardapi-volume-e030a9c4-2e68-441e-aa5d-b6b69638e36f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:56:48.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8995" for this suite.
Jan 29 13:56:55.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:56:55.257: INFO: namespace projected-8995 deletion completed in 6.251305536s

• [SLOW TEST:14.571 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
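"DefaultMode on files" is the mode applied to every file the projected downward-API volume writes; the pod just prints the mode and exits, hence Pending then Succeeded above. A minimal sketch, with assumed names and a busybox reader in place of the suite's client-container image:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]   # prints e.g. 400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400              # the mode under test
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef: {fieldPath: metadata.name}
EOF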
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:56:55.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-ea3f6787-36cf-4e10-b274-c3322c5103d0
STEP: Creating a pod to test consume secrets
Jan 29 13:56:55.392: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45" in namespace "projected-5612" to be "success or failure"
Jan 29 13:56:55.412: INFO: Pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45": Phase="Pending", Reason="", readiness=false. Elapsed: 20.159027ms
Jan 29 13:56:57.421: INFO: Pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029129884s
Jan 29 13:56:59.511: INFO: Pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118916017s
Jan 29 13:57:01.519: INFO: Pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126660557s
Jan 29 13:57:03.526: INFO: Pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134248003s
STEP: Saw pod success
Jan 29 13:57:03.526: INFO: Pod "pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45" satisfied condition "success or failure"
Jan 29 13:57:03.532: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45 container secret-volume-test: 
STEP: delete the pod
Jan 29 13:57:03.619: INFO: Waiting for pod pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45 to disappear
Jan 29 13:57:03.632: INFO: Pod pod-projected-secrets-bfe72700-d7bd-4cbe-bbce-a6eaf79e9c45 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:57:03.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5612" for this suite.
Jan 29 13:57:09.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:57:09.929: INFO: namespace projected-5612 deletion completed in 6.283019414s

• [SLOW TEST:14.672 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
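"Multiple volumes" means one secret projected through two distinct volumes mounted at different paths, with the container checking that both copies match. Roughly, with assumed names:

$ kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cmp /etc/secret-1/data-1 /etc/secret-2/data-1 && echo identical"]
    volumeMounts:
    - {name: secret-1, mountPath: /etc/secret-1}
    - {name: secret-2, mountPath: /etc/secret-2}
  volumes:
  - name: secret-1
    projected:
      sources:
      - secret: {name: projected-secret-test}
  - name: secret-2
    projected:
      sources:
      - secret: {name: projected-secret-test}
EOF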
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:57:09.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-4b715020-608e-4137-99a7-f737b603b048 in namespace container-probe-6872
Jan 29 13:57:18.096: INFO: Started pod liveness-4b715020-608e-4137-99a7-f737b603b048 in namespace container-probe-6872
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 13:57:18.101: INFO: Initial restart count of pod liveness-4b715020-608e-4137-99a7-f737b603b048 is 0
Jan 29 13:57:34.240: INFO: Restart count of pod container-probe-6872/liveness-4b715020-608e-4137-99a7-f737b603b048 is now 1 (16.138768747s elapsed)
Jan 29 13:57:52.380: INFO: Restart count of pod container-probe-6872/liveness-4b715020-608e-4137-99a7-f737b603b048 is now 2 (34.278722626s elapsed)
Jan 29 13:58:12.848: INFO: Restart count of pod container-probe-6872/liveness-4b715020-608e-4137-99a7-f737b603b048 is now 3 (54.746569731s elapsed)
Jan 29 13:58:32.963: INFO: Restart count of pod container-probe-6872/liveness-4b715020-608e-4137-99a7-f737b603b048 is now 4 (1m14.862086402s elapsed)
Jan 29 13:59:31.306: INFO: Restart count of pod container-probe-6872/liveness-4b715020-608e-4137-99a7-f737b603b048 is now 5 (2m13.204394307s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:59:31.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6872" for this suite.
Jan 29 13:59:37.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:59:37.607: INFO: namespace container-probe-6872 deletion completed in 6.244133722s

• [SLOW TEST:147.678 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
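The cadence above (restarts roughly every 20s, stretching as crash-loop backoff grows) is what an always-failing liveness probe produces; the assertion is only that status.restartCount never decreases. A sketch using /bin/false as the probe (an assumption; the suite's pod differs):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-always-fails
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["/bin/false"]    # never succeeds, so the kubelet keeps restarting
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
$ kubectl get pod liveness-always-fails -w \
    -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount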
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:59:37.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 29 13:59:37.722: INFO: Waiting up to 5m0s for pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8" in namespace "containers-5531" to be "success or failure"
Jan 29 13:59:37.732: INFO: Pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.987317ms
Jan 29 13:59:39.756: INFO: Pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03289802s
Jan 29 13:59:41.763: INFO: Pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040281506s
Jan 29 13:59:43.786: INFO: Pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062938909s
Jan 29 13:59:45.801: INFO: Pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078436746s
STEP: Saw pod success
Jan 29 13:59:45.801: INFO: Pod "client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8" satisfied condition "success or failure"
Jan 29 13:59:45.807: INFO: Trying to get logs from node iruya-node pod client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8 container test-container: 
STEP: delete the pod
Jan 29 13:59:46.063: INFO: Waiting for pod client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8 to disappear
Jan 29 13:59:46.073: INFO: Pod client-containers-fd2c9083-5ec7-450a-b7b4-d5284106e1c8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 13:59:46.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5531" for this suite.
Jan 29 13:59:52.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:59:52.266: INFO: namespace containers-5531 deletion completed in 6.171702799s

• [SLOW TEST:14.658 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
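In pod-spec terms, "overriding the docker entrypoint" is just setting .spec.containers[].command, which replaces the image's ENTRYPOINT (args would replace CMD). The Pending-then-Succeeded pattern above comes from restartPolicy: Never plus a command that exits 0:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo", "entrypoint", "overridden"]   # replaces ENTRYPOINT
EOF
$ kubectl logs client-containers-override    # expect: entrypoint overridden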
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 13:59:52.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-eb30390e-143d-418c-b2cc-721dd5d0ca5c in namespace container-probe-8204
Jan 29 14:00:00.385: INFO: Started pod liveness-eb30390e-143d-418c-b2cc-721dd5d0ca5c in namespace container-probe-8204
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 14:00:00.390: INFO: Initial restart count of pod liveness-eb30390e-143d-418c-b2cc-721dd5d0ca5c is 0
Jan 29 14:00:24.540: INFO: Restart count of pod container-probe-8204/liveness-eb30390e-143d-418c-b2cc-721dd5d0ca5c is now 1 (24.150803982s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:00:24.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8204" for this suite.
Jan 29 14:00:30.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:00:30.832: INFO: namespace container-probe-8204 deletion completed in 6.163710777s

• [SLOW TEST:38.565 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
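A single restart after ~24s matches a server that answers /healthz successfully at first and then starts failing, at which point the kubelet's httpGet probe restarts the container. Sketched below; the liveness image and its behavior are assumptions:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumption: serves /healthz OK briefly, then 500s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF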
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:00:30.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5654
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5654
STEP: Creating statefulset with conflicting port in namespace statefulset-5654
STEP: Waiting until pod test-pod starts running in namespace statefulset-5654
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5654
Jan 29 14:00:41.100: INFO: Observed stateful pod in namespace: statefulset-5654, name: ss-0, uid: 6a552dbb-0d1a-443b-a5f2-97680f2af908, status phase: Pending. Waiting for statefulset controller to delete.
Jan 29 14:00:46.526: INFO: Observed stateful pod in namespace: statefulset-5654, name: ss-0, uid: 6a552dbb-0d1a-443b-a5f2-97680f2af908, status phase: Failed. Waiting for statefulset controller to delete.
Jan 29 14:00:46.646: INFO: Observed stateful pod in namespace: statefulset-5654, name: ss-0, uid: 6a552dbb-0d1a-443b-a5f2-97680f2af908, status phase: Failed. Waiting for statefulset controller to delete.
Jan 29 14:00:46.656: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5654
STEP: Removing pod with conflicting port in namespace statefulset-5654
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5654 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 29 14:00:56.894: INFO: Deleting all statefulset in ns statefulset-5654
Jan 29 14:00:56.903: INFO: Scaling statefulset ss to 0
Jan 29 14:01:06.960: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 14:01:06.966: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:01:06.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5654" for this suite.
Jan 29 14:01:13.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:01:13.162: INFO: namespace statefulset-5654 deletion completed in 6.158543538s

• [SLOW TEST:42.329 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
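The engineered conflict is a hostPort collision: a bare pod is pinned to the node and grabs a host port, the StatefulSet pod wanting the same hostPort then fails admission there (the Failed phases above), and the controller keeps deleting and recreating ss-0 until the squatter is removed. A sketch, with an assumed port number:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-node              # pin to the same node as the stateful pod
  containers:
  - name: squatter
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    ports:
    - containerPort: 80
      hostPort: 21017               # assumption: any free host port works
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector: {matchLabels: {app: ss}}
  template:
    metadata: {labels: {app: ss}}
    spec:
      nodeName: iruya-node
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017           # collides with test-pod until it is deleted
EOF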
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:01:13.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 29 14:01:13.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7556'
Jan 29 14:01:15.686: INFO: stderr: ""
Jan 29 14:01:15.687: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 14:01:15.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7556'
Jan 29 14:01:15.890: INFO: stderr: ""
Jan 29 14:01:15.890: INFO: stdout: "update-demo-nautilus-6nbbf update-demo-nautilus-9khc2 "
Jan 29 14:01:15.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nbbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:16.035: INFO: stderr: ""
Jan 29 14:01:16.035: INFO: stdout: ""
Jan 29 14:01:16.035: INFO: update-demo-nautilus-6nbbf is created but not running
Jan 29 14:01:21.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7556'
Jan 29 14:01:21.386: INFO: stderr: ""
Jan 29 14:01:21.386: INFO: stdout: "update-demo-nautilus-6nbbf update-demo-nautilus-9khc2 "
Jan 29 14:01:21.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nbbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:21.764: INFO: stderr: ""
Jan 29 14:01:21.765: INFO: stdout: ""
Jan 29 14:01:21.765: INFO: update-demo-nautilus-6nbbf is created but not running
Jan 29 14:01:26.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7556'
Jan 29 14:01:26.933: INFO: stderr: ""
Jan 29 14:01:26.933: INFO: stdout: "update-demo-nautilus-6nbbf update-demo-nautilus-9khc2 "
Jan 29 14:01:26.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nbbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:27.048: INFO: stderr: ""
Jan 29 14:01:27.048: INFO: stdout: "true"
Jan 29 14:01:27.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6nbbf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:27.165: INFO: stderr: ""
Jan 29 14:01:27.165: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:01:27.165: INFO: validating pod update-demo-nautilus-6nbbf
Jan 29 14:01:27.176: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:01:27.176: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:01:27.176: INFO: update-demo-nautilus-6nbbf is verified up and running
Jan 29 14:01:27.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9khc2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:27.296: INFO: stderr: ""
Jan 29 14:01:27.296: INFO: stdout: "true"
Jan 29 14:01:27.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9khc2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:27.429: INFO: stderr: ""
Jan 29 14:01:27.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:01:27.429: INFO: validating pod update-demo-nautilus-9khc2
Jan 29 14:01:27.451: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:01:27.451: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:01:27.451: INFO: update-demo-nautilus-9khc2 is verified up and running
STEP: rolling-update to new replication controller
Jan 29 14:01:27.453: INFO: scanned /root for discovery docs: 
Jan 29 14:01:27.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7556'
Jan 29 14:01:59.596: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 29 14:01:59.597: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 14:01:59.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7556'
Jan 29 14:01:59.894: INFO: stderr: ""
Jan 29 14:01:59.894: INFO: stdout: "update-demo-kitten-mcrh8 update-demo-kitten-qgx7z "
Jan 29 14:01:59.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mcrh8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:01:59.995: INFO: stderr: ""
Jan 29 14:01:59.995: INFO: stdout: "true"
Jan 29 14:01:59.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mcrh8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:02:00.117: INFO: stderr: ""
Jan 29 14:02:00.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 29 14:02:00.117: INFO: validating pod update-demo-kitten-mcrh8
Jan 29 14:02:00.135: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 29 14:02:00.135: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 29 14:02:00.135: INFO: update-demo-kitten-mcrh8 is verified up and running
Jan 29 14:02:00.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qgx7z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:02:00.254: INFO: stderr: ""
Jan 29 14:02:00.254: INFO: stdout: "true"
Jan 29 14:02:00.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qgx7z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7556'
Jan 29 14:02:00.349: INFO: stderr: ""
Jan 29 14:02:00.349: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 29 14:02:00.350: INFO: validating pod update-demo-kitten-qgx7z
Jan 29 14:02:00.534: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 29 14:02:00.535: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 29 14:02:00.535: INFO: update-demo-kitten-qgx7z is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:02:00.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7556" for this suite.
Jan 29 14:02:26.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:02:26.720: INFO: namespace kubectl-7556 deletion completed in 26.170095022s

• [SLOW TEST:73.558 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
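The rolling update itself is the (already deprecated) kubectl rolling-update verb: it takes the live RC plus a replacement RC spec, scales the two against each other one step per --update-period, then renames the new RC to the old name, exactly as the stdout above narrates. By hand, roughly:

$ kubectl rolling-update update-demo-nautilus --update-period=1s -f update-demo-kitten.yaml
# update-demo-kitten.yaml (illustrative file name): an RC named update-demo-kitten
# running gcr.io/kubernetes-e2e-test-images/kitten:1.0, keeping the name=update-demo
# label but with a selector differing from the old RC by at least one key,
# which rolling-update requires to tell old pods from new ones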
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:02:26.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 29 14:02:33.875: INFO: 1 pods remaining
Jan 29 14:02:33.875: INFO: 0 pods have nil DeletionTimestamp
Jan 29 14:02:33.875: INFO: 
STEP: Gathering metrics
W0129 14:02:34.401574       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 14:02:34.401: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:02:34.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2880" for this suite.
Jan 29 14:02:44.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:02:44.726: INFO: namespace gc-2880 deletion completed in 10.322184497s

• [SLOW TEST:18.005 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
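The "deleteOptions says so" clause maps to propagationPolicy: Foreground: the API server parks the RC behind a foregroundDeletion finalizer and the garbage collector removes its pods first, which is why the log still sees "1 pods remaining" after the delete. The equivalent request by hand, with an illustrative RC name (the suite's actual name isn't shown above):

$ kubectl proxy --port=8001 &
$ curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/gc-2880/replicationcontrollers/simpletest-rc \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# the RC stays visible to GET, finalizer intact, until its last pod is deleted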
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:02:44.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 29 14:02:44.860: INFO: Waiting up to 5m0s for pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c" in namespace "emptydir-5929" to be "success or failure"
Jan 29 14:02:44.879: INFO: Pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.135505ms
Jan 29 14:02:46.892: INFO: Pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031995717s
Jan 29 14:02:48.907: INFO: Pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046508356s
Jan 29 14:02:50.947: INFO: Pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086529568s
Jan 29 14:02:52.954: INFO: Pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093359854s
STEP: Saw pod success
Jan 29 14:02:52.954: INFO: Pod "pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c" satisfied condition "success or failure"
Jan 29 14:02:52.958: INFO: Trying to get logs from node iruya-node pod pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c container test-container: 
STEP: delete the pod
Jan 29 14:02:53.014: INFO: Waiting for pod pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c to disappear
Jan 29 14:02:53.019: INFO: Pod pod-c8ede1da-4ffe-41b8-b87f-d5a42228b68c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:02:53.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5929" for this suite.
Jan 29 14:02:59.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:02:59.136: INFO: namespace emptydir-5929 deletion completed in 6.111411715s

• [SLOW TEST:14.409 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
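Decoded, the test name means: run as a non-root user, back the volume with an emptyDir on the default (node disk) medium, and expect the mount to carry 0777 permissions so the non-root container can write. Roughly, with assumed names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c %a /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium (node disk)
EOF
$ kubectl logs pod-emptydir-nonroot   # expect: 777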
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:02:59.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 14:02:59.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5482'
Jan 29 14:02:59.362: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 14:02:59.363: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 29 14:02:59.404: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j277v]
Jan 29 14:02:59.404: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j277v" in namespace "kubectl-5482" to be "running and ready"
Jan 29 14:02:59.417: INFO: Pod "e2e-test-nginx-rc-j277v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.916993ms
Jan 29 14:03:01.428: INFO: Pod "e2e-test-nginx-rc-j277v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02316815s
Jan 29 14:03:03.438: INFO: Pod "e2e-test-nginx-rc-j277v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033994231s
Jan 29 14:03:05.476: INFO: Pod "e2e-test-nginx-rc-j277v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071932764s
Jan 29 14:03:07.485: INFO: Pod "e2e-test-nginx-rc-j277v": Phase="Running", Reason="", readiness=true. Elapsed: 8.080634066s
Jan 29 14:03:07.485: INFO: Pod "e2e-test-nginx-rc-j277v" satisfied condition "running and ready"
Jan 29 14:03:07.485: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-j277v]
Jan 29 14:03:07.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5482'
Jan 29 14:03:07.766: INFO: stderr: ""
Jan 29 14:03:07.766: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 29 14:03:07.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5482'
Jan 29 14:03:07.888: INFO: stderr: ""
Jan 29 14:03:07.888: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:03:07.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5482" for this suite.
Jan 29 14:03:21.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:03:22.000: INFO: namespace kubectl-5482 deletion completed in 14.106883193s

• [SLOW TEST:22.864 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
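--generator=run/v1 is the switch that makes kubectl run emit a ReplicationController instead of a Deployment (hence the deprecation warning above); logs are then readable through the rc/ prefix, which resolves to one of the RC's pods. By hand:

$ kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
$ kubectl get rc,pods -l run=e2e-test-nginx-rc     # kubectl run labels everything run=<name>
$ kubectl logs rc/e2e-test-nginx-rc
$ kubectl delete rc e2e-test-nginx-rc

The empty stdout in the log is expected: nginx:1.14-alpine prints nothing at startup, and the test only asserts that fetching logs through the rc/ prefix works.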
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:03:22.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:03:22.193: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 35.088469ms)
Jan 29 14:03:22.201: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.581221ms)
Jan 29 14:03:22.207: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.803971ms)
Jan 29 14:03:22.213: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.572142ms)
Jan 29 14:03:22.217: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.742459ms)
Jan 29 14:03:22.222: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.328507ms)
Jan 29 14:03:22.229: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.943594ms)
Jan 29 14:03:22.235: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.05368ms)
Jan 29 14:03:22.242: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.297622ms)
Jan 29 14:03:22.276: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 33.144456ms)
Jan 29 14:03:22.283: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.803882ms)
Jan 29 14:03:22.291: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.815814ms)
Jan 29 14:03:22.299: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.874066ms)
Jan 29 14:03:22.308: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.640278ms)
Jan 29 14:03:22.317: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.772272ms)
Jan 29 14:03:22.324: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.439408ms)
Jan 29 14:03:22.332: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.803744ms)
Jan 29 14:03:22.338: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.247323ms)
Jan 29 14:03:22.349: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.56795ms)
Jan 29 14:03:22.358: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.176856ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:03:22.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3962" for this suite.
Jan 29 14:03:28.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:03:28.574: INFO: namespace proxy-3962 deletion completed in 6.207760073s

• [SLOW TEST:6.574 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
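All twenty numbered requests hit the node proxy subresource, which relays the kubelet's /logs file listing through the API server; the "alternatives.l..." fragments are just the test truncating the returned HTML. The same endpoint can be queried directly:

$ kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
# returns an HTML listing of the node's /var/log (alternatives.log, ...)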
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:03:28.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 29 14:03:28.736: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:03:42.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4783" for this suite.
Jan 29 14:03:48.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:03:48.337: INFO: namespace init-container-4783 deletion completed in 6.191430833s

• [SLOW TEST:19.763 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
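Behind the single PodSpec line above sits a pod with restartPolicy: Never, an init container that exits non-zero, and an app container that must never start; the pod is expected to end up Failed. A minimal version, with assumed names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]        # fails, so the pod goes Failed
  containers:
  - name: run1
    image: busybox:1.29
    command: ["/bin/true"]         # must never start
EOF
$ kubectl get pod pod-init-fail -o jsonpath='{.status.phase}'   # expect: Failed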
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:03:48.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 29 14:06:46.989: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 29 14:06:47.039: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 29 14:06:49.040 - 14:08:25.059: INFO: Pod pod-with-poststart-exec-hook still exists (polled every 2s)
Jan 29 14:08:27.042: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 29 14:08:27.048: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:08:27.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9032" for this suite.
Jan 29 14:08:49.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:08:49.169: INFO: namespace container-lifecycle-hook-9032 deletion completed in 22.115843317s

• [SLOW TEST:300.831 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
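
For reference, the pod exercised above declares a postStart exec hook; the test creates it, deletes it, and polls until it is gone. A minimal sketch of such a pod, with the pod name taken from the log but the image and commands assumed rather than the suite's actual fixture:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main                        # illustrative container name
    image: busybox                    # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart ran"]   # runs right after the container starts

The kubelet executes the postStart handler immediately after the container starts; if the handler fails, the container is killed and handled according to the pod's restartPolicy, so the test passing implies the hook ran cleanly.
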
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:08:49.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0129 14:09:01.128545       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 14:09:01.128: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:09:01.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6787" for this suite.
Jan 29 14:09:07.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:09:07.300: INFO: namespace gc-6787 deletion completed in 6.16959971s

• [SLOW TEST:18.131 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
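
The deletion semantics verified above rest on ownerReferences: pods created by a ReplicationController point back at it, so deleting the RC without orphaning lets the garbage collector remove the pods, which is what the "wait for all pods to be garbage collected" step observes. A minimal sketch of such an RC (name, labels, and image are assumptions):

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc                 # illustrative name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # assumed image

With kubectl of this era, `kubectl delete rc simpletest-rc` cascades by default, while `--cascade=false` (spelled `--cascade=orphan` on newer kubectl) would orphan the pods instead.
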
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:09:07.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-3d60464b-6e03-4aae-b8c3-6d448015ed30
STEP: Creating a pod to test consume secrets
Jan 29 14:09:07.387: INFO: Waiting up to 5m0s for pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff" in namespace "secrets-2091" to be "success or failure"
Jan 29 14:09:07.452: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 65.51945ms
Jan 29 14:09:09.462: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075331805s
Jan 29 14:09:11.484: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097078694s
Jan 29 14:09:13.492: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10476066s
Jan 29 14:09:15.503: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116194394s
Jan 29 14:09:17.512: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125157035s
STEP: Saw pod success
Jan 29 14:09:17.512: INFO: Pod "pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff" satisfied condition "success or failure"
Jan 29 14:09:17.517: INFO: Trying to get logs from node iruya-node pod pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff container secret-volume-test: 
STEP: delete the pod
Jan 29 14:09:17.764: INFO: Waiting for pod pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff to disappear
Jan 29 14:09:17.786: INFO: Pod pod-secrets-94541083-3465-456b-8545-a1f31f4fc5ff no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:09:17.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2091" for this suite.
Jan 29 14:09:23.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:09:23.965: INFO: namespace secrets-2091 deletion completed in 6.131431734s

• [SLOW TEST:16.664 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
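
The volume in this test maps a single secret key to a new path and sets a per-item file mode, which the pod then reads back and verifies. A minimal sketch, with the secret name taken from the log but the image, key, path, and mode being illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                    # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-3d60464b-6e03-4aae-b8c3-6d448015ed30
      items:
      - key: data-1                   # illustrative key
        path: new-path-data-1         # remapped path the container reads
        mode: 0400                    # per-item mode the test asserts on
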
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:09:23.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:09:24.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7542" for this suite.
Jan 29 14:09:46.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:09:46.362: INFO: namespace pods-7542 deletion completed in 22.237358385s

• [SLOW TEST:22.396 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
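
The QOS class asserted above is derived from resource requests and limits: Guaranteed when every container has equal, non-empty requests and limits for both cpu and memory; Burstable when at least one request or limit is set but the Guaranteed criteria are not met; BestEffort when nothing is set. A sketch of a Guaranteed pod (name, image, and quantities are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example
spec:
  containers:
  - name: app
    image: busybox                    # assumed image
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 100m                     # equal to requests, so the pod is Guaranteed
        memory: 128Mi

After admission the computed class appears in status.qosClass, which is the field the test verifies.
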
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:09:46.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-629
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-629
STEP: Deleting pre-stop pod
Jan 29 14:10:07.616: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:10:07.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-629" for this suite.
Jan 29 14:10:47.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:10:47.953: INFO: namespace prestop-629 deletion completed in 40.313255714s

• [SLOW TEST:61.591 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
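
The pre-stop behaviour above is implemented with a preStop lifecycle handler: when the tester pod is deleted, the kubelet runs the handler before terminating the container, and the handler notifies the server pod, which is why the server reports "prestop": 1 in the JSON above. A minimal sketch (image, command, and the notification URL are assumptions, not the suite's fixture):

apiVersion: v1
kind: Pod
metadata:
  name: tester                        # name from the log; spec is illustrative
spec:
  containers:
  - name: tester
    image: busybox                    # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-qO-", "http://server:8080/prestop"]   # hypothetical endpoint

The handler must finish (or exceed terminationGracePeriodSeconds) before the container is killed, which is why the server can observe the prestop call before the pod disappears.
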
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:10:47.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-94a0a1c5-9934-45de-b0f9-314eb7372a41
STEP: Creating a pod to test consume configMaps
Jan 29 14:10:48.099: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29" in namespace "projected-3691" to be "success or failure"
Jan 29 14:10:48.143: INFO: Pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 44.273798ms
Jan 29 14:10:50.151: INFO: Pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051899427s
Jan 29 14:10:52.204: INFO: Pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104868667s
Jan 29 14:10:54.216: INFO: Pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116490941s
Jan 29 14:10:56.221: INFO: Pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121923458s
STEP: Saw pod success
Jan 29 14:10:56.221: INFO: Pod "pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29" satisfied condition "success or failure"
Jan 29 14:10:56.224: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 14:10:56.296: INFO: Waiting for pod pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29 to disappear
Jan 29 14:10:56.320: INFO: Pod pod-projected-configmaps-069d6baa-9a27-4b29-918a-b71fdb76cf29 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:10:56.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3691" for this suite.
Jan 29 14:11:02.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:11:02.671: INFO: namespace projected-3691 deletion completed in 6.342773222s

• [SLOW TEST:14.718 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
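
"With mappings" here means the projected volume remaps a configMap key to a different file path via items. A sketch, with the configMap name taken from the log and the key, path, and image being illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                    # assumed image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-94a0a1c5-9934-45de-b0f9-314eb7372a41
          items:
          - key: data-2               # illustrative key
            path: path/to/data-2      # remapped path the container reads back
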
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:11:02.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 29 14:11:09.810: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:11:09.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2385" for this suite.
Jan 29 14:11:15.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:11:16.069: INFO: namespace container-runtime-2385 deletion completed in 6.147576898s

• [SLOW TEST:13.397 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
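
The "Expected: &{DONE}" assertion above follows from terminationMessagePolicy: FallbackToLogsOnError: when a container fails and its termination-message file is empty, the kubelet falls back to the tail of the container log. A sketch that would reproduce the DONE message (name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                    # assumed image
    command: ["sh", "-c", "echo -n DONE; exit 1"]   # writes to the log, not to /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError

Since the container exits non-zero and the termination-log file stays empty, status.containerStatuses[0].state.terminated.message becomes "DONE", which is what the test matches.
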
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:11:16.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 29 14:11:24.228: INFO: Pod pod-hostip-c3cf9427-9b52-46c7-a8ec-0918fa831255 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:11:24.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5184" for this suite.
Jan 29 14:11:46.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:11:46.396: INFO: namespace pods-5184 deletion completed in 22.162012059s

• [SLOW TEST:30.327 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
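
The test reads status.hostIP from the pod object once it is scheduled; the same value can also be surfaced inside the container through the downward API. A sketch (name, image, and env var are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example
spec:
  containers:
  - name: app
    image: busybox                    # assumed image
    command: ["sh", "-c", "echo host IP is $HOST_IP; sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # the node IP, e.g. 10.96.3.65 in the run above
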
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:11:46.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-37fc2b3d-926b-453f-a390-73eaadccac84
STEP: Creating a pod to test consume configMaps
Jan 29 14:11:46.605: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a" in namespace "projected-5739" to be "success or failure"
Jan 29 14:11:46.723: INFO: Pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a": Phase="Pending", Reason="", readiness=false. Elapsed: 117.725278ms
Jan 29 14:11:48.730: INFO: Pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124662778s
Jan 29 14:11:50.754: INFO: Pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148084396s
Jan 29 14:11:52.782: INFO: Pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176484325s
Jan 29 14:11:54.795: INFO: Pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.18956543s
STEP: Saw pod success
Jan 29 14:11:54.796: INFO: Pod "pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a" satisfied condition "success or failure"
Jan 29 14:11:54.800: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 14:11:54.915: INFO: Waiting for pod pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a to disappear
Jan 29 14:11:54.921: INFO: Pod pod-projected-configmaps-86b23fe8-5421-4ab8-9d59-bb0c899e309a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:11:54.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5739" for this suite.
Jan 29 14:12:00.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:12:01.060: INFO: namespace projected-5739 deletion completed in 6.135846503s

• [SLOW TEST:14.664 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
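
"Consumable in multiple volumes in the same pod" means one configMap backs two separate projected volumes mounted at different paths. A sketch, with the configMap name taken from the log and the mount paths and image assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-multi
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:
  - name: cm-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-37fc2b3d-926b-453f-a390-73eaadccac84
  - name: cm-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-37fc2b3d-926b-453f-a390-73eaadccac84
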
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:12:01.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6737
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6737 to expose endpoints map[]
Jan 29 14:12:01.311: INFO: Get endpoints failed (64.0476ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 29 14:12:02.321: INFO: successfully validated that service multi-endpoint-test in namespace services-6737 exposes endpoints map[] (1.074930726s elapsed)
STEP: Creating pod pod1 in namespace services-6737
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6737 to expose endpoints map[pod1:[100]]
Jan 29 14:12:06.425: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.094765762s elapsed, will retry)
Jan 29 14:12:09.471: INFO: successfully validated that service multi-endpoint-test in namespace services-6737 exposes endpoints map[pod1:[100]] (7.140507968s elapsed)
STEP: Creating pod pod2 in namespace services-6737
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6737 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 29 14:12:14.507: INFO: Unexpected endpoints: found map[67e94066-5d8d-4999-bcdb-5b9f51b4e275:[100]], expected map[pod1:[100] pod2:[101]] (5.032598141s elapsed, will retry)
Jan 29 14:12:17.584: INFO: successfully validated that service multi-endpoint-test in namespace services-6737 exposes endpoints map[pod1:[100] pod2:[101]] (8.109595027s elapsed)
STEP: Deleting pod pod1 in namespace services-6737
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6737 to expose endpoints map[pod2:[101]]
Jan 29 14:12:18.740: INFO: successfully validated that service multi-endpoint-test in namespace services-6737 exposes endpoints map[pod2:[101]] (1.147993679s elapsed)
STEP: Deleting pod pod2 in namespace services-6737
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6737 to expose endpoints map[]
Jan 29 14:12:20.077: INFO: successfully validated that service multi-endpoint-test in namespace services-6737 exposes endpoints map[] (1.324660904s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:12:20.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6737" for this suite.
Jan 29 14:12:42.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:12:42.755: INFO: namespace services-6737 deletion completed in 22.220635889s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.694 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
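
The endpoint maps in the log (pod1:[100], pod2:[101]) come from a Service with two named ports whose targetPorts resolve against each pod's named container ports; a pod that declares only one of the names appears only under the matching service port. A sketch, with the service name from the log and the selector, port names, and numbers assumed:

apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    name: multi-endpoint-test         # assumed label selector shared by pod1 and pod2
  ports:
  - name: portname1
    port: 80
    targetPort: portname1             # pod1 declares containerPort 100 named portname1
  - name: portname2
    port: 81
    targetPort: portname2             # pod2 declares containerPort 101 named portname2

Because the targetPorts are names rather than numbers, each pod contributes an endpoint only for the port name it actually declares, producing the per-pod maps the test waits for.
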
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:12:42.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:12:50.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6499" for this suite.
Jan 29 14:13:34.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:13:35.098: INFO: namespace kubelet-test-6499 deletion completed in 44.137504195s

• [SLOW TEST:52.343 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
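
The read-only check above is driven by securityContext.readOnlyRootFilesystem: with it set, any write to the container's root filesystem fails, which the test verifies from inside the container. A sketch (name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example
spec:
  containers:
  - name: busybox-readonly
    image: busybox                    # assumed image
    command: ["sh", "-c", "touch /should-fail; sleep 3600"]   # the touch is expected to fail
    securityContext:
      readOnlyRootFilesystem: true
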
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:13:35.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:13:45.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7614" for this suite.
Jan 29 14:14:47.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:14:47.385: INFO: namespace kubelet-test-7614 deletion completed in 1m2.160720306s

• [SLOW TEST:72.287 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
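
This test runs a one-shot command and asserts its output is retrievable through the log endpoint; the shape is simply a pod whose container echoes to stdout. A sketch (name, image, and message are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                    # assumed image
    command: ["sh", "-c", "echo 'hello from busybox'"]
# kubectl logs busybox-logs-example   -> hello from busybox
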
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:14:47.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:14:54.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1860" for this suite.
Jan 29 14:15:00.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:15:00.183: INFO: namespace namespaces-1860 deletion completed in 6.150615388s
STEP: Destroying namespace "nsdeletetest-9043" for this suite.
Jan 29 14:15:00.185: INFO: Namespace nsdeletetest-9043 was already deleted
STEP: Destroying namespace "nsdeletetest-8435" for this suite.
Jan 29 14:15:06.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:15:06.360: INFO: namespace nsdeletetest-8435 deletion completed in 6.175262218s

• [SLOW TEST:18.975 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
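
Deleting a namespace cascades to every namespaced object inside it, Services included, and recreating a namespace of the same name starts empty. The fixture is essentially a Service in a throwaway namespace; a sketch, with the namespace name from the log and the service name and port assumed:

apiVersion: v1
kind: Service
metadata:
  name: test-service                  # illustrative
  namespace: nsdeletetest-9043
spec:
  ports:
  - port: 80
# After "kubectl delete namespace nsdeletetest-9043" completes, recreating the
# namespace and listing services in it returns nothing, which is the assertion.
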
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:15:06.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d7d185f5-8ba6-449c-abea-a31b41479d1b
STEP: Creating a pod to test consume secrets
Jan 29 14:15:06.484: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9" in namespace "projected-6372" to be "success or failure"
Jan 29 14:15:06.495: INFO: Pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.604136ms
Jan 29 14:15:08.512: INFO: Pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02806246s
Jan 29 14:15:10.528: INFO: Pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043726833s
Jan 29 14:15:12.540: INFO: Pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056233179s
Jan 29 14:15:14.552: INFO: Pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068114463s
STEP: Saw pod success
Jan 29 14:15:14.552: INFO: Pod "pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9" satisfied condition "success or failure"
Jan 29 14:15:14.560: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 14:15:14.665: INFO: Waiting for pod pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9 to disappear
Jan 29 14:15:14.677: INFO: Pod pod-projected-secrets-97c50297-5458-480e-9321-11a3f83d9de9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:15:14.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6372" for this suite.
Jan 29 14:15:20.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:15:20.877: INFO: namespace projected-6372 deletion completed in 6.183036903s

• [SLOW TEST:14.517 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
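
Here the pod runs as a non-root user, sets an fsGroup, and projects a secret with a defaultMode; the test then checks the mounted file's mode and ownership. A sketch, with the secret name taken from the log and the uid, gid, mode, paths, and image being assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root
    fsGroup: 1001                     # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: busybox                    # assumed image
    command: ["sh", "-c", "ls -ln /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440               # mode the test asserts on
      sources:
      - secret:
          name: projected-secret-test-d7d185f5-8ba6-449c-abea-a31b41479d1b
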
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:15:20.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:15:29.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4806" for this suite.
Jan 29 14:15:35.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:15:35.568: INFO: namespace kubelet-test-4806 deletion completed in 6.168270465s

• [SLOW TEST:14.690 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
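
A container whose command always fails, in a pod that does not keep restarting it, ends with a terminated state carrying a reason; that status field is what the assertion reads. A sketch (name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-example
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox                    # assumed image
    command: ["/bin/false"]           # always exits 1
# status.containerStatuses[0].state.terminated then reports reason: Error, exitCode: 1
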
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:15:35.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
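
The two fixture pods differ in how /etc/hosts is provided: in the hostNetwork=false pod the kubelet manages /etc/hosts for containers that do not mount it themselves, while a container that mounts the file explicitly (and the whole hostNetwork=true pod) keeps the node's own file. A sketch of the opt-out container, with names following the log but the spec being an illustrative assumption:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                      # name from the log; spec is illustrative
spec:
  containers:
  - name: busybox-3
    image: busybox                    # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts           # explicit mount opts this container out of kubelet management
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
      type: File
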
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 29 14:15:57.830: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:57.831: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:57.927959       8 log.go:172] (0xc0007af550) (0xc0020a6960) Create stream
I0129 14:15:57.928041       8 log.go:172] (0xc0007af550) (0xc0020a6960) Stream added, broadcasting: 1
I0129 14:15:57.939174       8 log.go:172] (0xc0007af550) Reply frame received for 1
I0129 14:15:57.939445       8 log.go:172] (0xc0007af550) (0xc0027ac000) Create stream
I0129 14:15:57.939481       8 log.go:172] (0xc0007af550) (0xc0027ac000) Stream added, broadcasting: 3
I0129 14:15:57.942796       8 log.go:172] (0xc0007af550) Reply frame received for 3
I0129 14:15:57.942831       8 log.go:172] (0xc0007af550) (0xc001ca8000) Create stream
I0129 14:15:57.942846       8 log.go:172] (0xc0007af550) (0xc001ca8000) Stream added, broadcasting: 5
I0129 14:15:57.945437       8 log.go:172] (0xc0007af550) Reply frame received for 5
I0129 14:15:58.083856       8 log.go:172] (0xc0007af550) Data frame received for 3
I0129 14:15:58.083996       8 log.go:172] (0xc0027ac000) (3) Data frame handling
I0129 14:15:58.084017       8 log.go:172] (0xc0027ac000) (3) Data frame sent
I0129 14:15:58.244860       8 log.go:172] (0xc0007af550) Data frame received for 1
I0129 14:15:58.244963       8 log.go:172] (0xc0020a6960) (1) Data frame handling
I0129 14:15:58.244988       8 log.go:172] (0xc0020a6960) (1) Data frame sent
I0129 14:15:58.246828       8 log.go:172] (0xc0007af550) (0xc0020a6960) Stream removed, broadcasting: 1
I0129 14:15:58.250418       8 log.go:172] (0xc0007af550) (0xc001ca8000) Stream removed, broadcasting: 5
I0129 14:15:58.250492       8 log.go:172] (0xc0007af550) (0xc0027ac000) Stream removed, broadcasting: 3
I0129 14:15:58.250529       8 log.go:172] (0xc0007af550) (0xc0020a6960) Stream removed, broadcasting: 1
I0129 14:15:58.250543       8 log.go:172] (0xc0007af550) (0xc0027ac000) Stream removed, broadcasting: 3
I0129 14:15:58.250571       8 log.go:172] (0xc0007af550) (0xc001ca8000) Stream removed, broadcasting: 5
Jan 29 14:15:58.251: INFO: Exec stderr: ""
I0129 14:15:58.251693       8 log.go:172] (0xc0007af550) Go away received
Jan 29 14:15:58.251: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:58.251: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:58.319729       8 log.go:172] (0xc0009a60b0) (0xc0020a6c80) Create stream
I0129 14:15:58.319807       8 log.go:172] (0xc0009a60b0) (0xc0020a6c80) Stream added, broadcasting: 1
I0129 14:15:58.328740       8 log.go:172] (0xc0009a60b0) Reply frame received for 1
I0129 14:15:58.328779       8 log.go:172] (0xc0009a60b0) (0xc001ca80a0) Create stream
I0129 14:15:58.328791       8 log.go:172] (0xc0009a60b0) (0xc001ca80a0) Stream added, broadcasting: 3
I0129 14:15:58.331144       8 log.go:172] (0xc0009a60b0) Reply frame received for 3
I0129 14:15:58.331170       8 log.go:172] (0xc0009a60b0) (0xc0027ac0a0) Create stream
I0129 14:15:58.331179       8 log.go:172] (0xc0009a60b0) (0xc0027ac0a0) Stream added, broadcasting: 5
I0129 14:15:58.332451       8 log.go:172] (0xc0009a60b0) Reply frame received for 5
I0129 14:15:58.412372       8 log.go:172] (0xc0009a60b0) Data frame received for 3
I0129 14:15:58.412727       8 log.go:172] (0xc001ca80a0) (3) Data frame handling
I0129 14:15:58.412828       8 log.go:172] (0xc001ca80a0) (3) Data frame sent
I0129 14:15:58.619677       8 log.go:172] (0xc0009a60b0) Data frame received for 1
I0129 14:15:58.619803       8 log.go:172] (0xc0009a60b0) (0xc0027ac0a0) Stream removed, broadcasting: 5
I0129 14:15:58.619858       8 log.go:172] (0xc0020a6c80) (1) Data frame handling
I0129 14:15:58.619899       8 log.go:172] (0xc0020a6c80) (1) Data frame sent
I0129 14:15:58.619935       8 log.go:172] (0xc0009a60b0) (0xc001ca80a0) Stream removed, broadcasting: 3
I0129 14:15:58.619967       8 log.go:172] (0xc0009a60b0) (0xc0020a6c80) Stream removed, broadcasting: 1
I0129 14:15:58.619987       8 log.go:172] (0xc0009a60b0) Go away received
I0129 14:15:58.620444       8 log.go:172] (0xc0009a60b0) (0xc0020a6c80) Stream removed, broadcasting: 1
I0129 14:15:58.620470       8 log.go:172] (0xc0009a60b0) (0xc001ca80a0) Stream removed, broadcasting: 3
I0129 14:15:58.620481       8 log.go:172] (0xc0009a60b0) (0xc0027ac0a0) Stream removed, broadcasting: 5
Jan 29 14:15:58.620: INFO: Exec stderr: ""
Jan 29 14:15:58.620: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:58.620: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:58.681583       8 log.go:172] (0xc000d6ac60) (0xc0027ac500) Create stream
I0129 14:15:58.681683       8 log.go:172] (0xc000d6ac60) (0xc0027ac500) Stream added, broadcasting: 1
I0129 14:15:58.689597       8 log.go:172] (0xc000d6ac60) Reply frame received for 1
I0129 14:15:58.689631       8 log.go:172] (0xc000d6ac60) (0xc0027ac5a0) Create stream
I0129 14:15:58.689638       8 log.go:172] (0xc000d6ac60) (0xc0027ac5a0) Stream added, broadcasting: 3
I0129 14:15:58.691257       8 log.go:172] (0xc000d6ac60) Reply frame received for 3
I0129 14:15:58.691397       8 log.go:172] (0xc000d6ac60) (0xc00140a5a0) Create stream
I0129 14:15:58.691409       8 log.go:172] (0xc000d6ac60) (0xc00140a5a0) Stream added, broadcasting: 5
I0129 14:15:58.692928       8 log.go:172] (0xc000d6ac60) Reply frame received for 5
I0129 14:15:58.780142       8 log.go:172] (0xc000d6ac60) Data frame received for 3
I0129 14:15:58.780247       8 log.go:172] (0xc0027ac5a0) (3) Data frame handling
I0129 14:15:58.780265       8 log.go:172] (0xc0027ac5a0) (3) Data frame sent
I0129 14:15:58.908905       8 log.go:172] (0xc000d6ac60) Data frame received for 1
I0129 14:15:58.909117       8 log.go:172] (0xc000d6ac60) (0xc00140a5a0) Stream removed, broadcasting: 5
I0129 14:15:58.909184       8 log.go:172] (0xc0027ac500) (1) Data frame handling
I0129 14:15:58.909201       8 log.go:172] (0xc0027ac500) (1) Data frame sent
I0129 14:15:58.909250       8 log.go:172] (0xc000d6ac60) (0xc0027ac5a0) Stream removed, broadcasting: 3
I0129 14:15:58.909300       8 log.go:172] (0xc000d6ac60) (0xc0027ac500) Stream removed, broadcasting: 1
I0129 14:15:58.909318       8 log.go:172] (0xc000d6ac60) Go away received
I0129 14:15:58.910042       8 log.go:172] (0xc000d6ac60) (0xc0027ac500) Stream removed, broadcasting: 1
I0129 14:15:58.910208       8 log.go:172] (0xc000d6ac60) (0xc0027ac5a0) Stream removed, broadcasting: 3
I0129 14:15:58.910225       8 log.go:172] (0xc000d6ac60) (0xc00140a5a0) Stream removed, broadcasting: 5
Jan 29 14:15:58.910: INFO: Exec stderr: ""
Jan 29 14:15:58.910: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:58.910: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:59.004569       8 log.go:172] (0xc0009a6e70) (0xc0020a7040) Create stream
I0129 14:15:59.004728       8 log.go:172] (0xc0009a6e70) (0xc0020a7040) Stream added, broadcasting: 1
I0129 14:15:59.011170       8 log.go:172] (0xc0009a6e70) Reply frame received for 1
I0129 14:15:59.011238       8 log.go:172] (0xc0009a6e70) (0xc001ca8140) Create stream
I0129 14:15:59.011248       8 log.go:172] (0xc0009a6e70) (0xc001ca8140) Stream added, broadcasting: 3
I0129 14:15:59.012919       8 log.go:172] (0xc0009a6e70) Reply frame received for 3
I0129 14:15:59.012955       8 log.go:172] (0xc0009a6e70) (0xc001ca8280) Create stream
I0129 14:15:59.012966       8 log.go:172] (0xc0009a6e70) (0xc001ca8280) Stream added, broadcasting: 5
I0129 14:15:59.014828       8 log.go:172] (0xc0009a6e70) Reply frame received for 5
I0129 14:15:59.111384       8 log.go:172] (0xc0009a6e70) Data frame received for 3
I0129 14:15:59.111422       8 log.go:172] (0xc001ca8140) (3) Data frame handling
I0129 14:15:59.111450       8 log.go:172] (0xc001ca8140) (3) Data frame sent
I0129 14:15:59.263760       8 log.go:172] (0xc0009a6e70) Data frame received for 1
I0129 14:15:59.263912       8 log.go:172] (0xc0020a7040) (1) Data frame handling
I0129 14:15:59.263967       8 log.go:172] (0xc0020a7040) (1) Data frame sent
I0129 14:15:59.264015       8 log.go:172] (0xc0009a6e70) (0xc0020a7040) Stream removed, broadcasting: 1
I0129 14:15:59.264083       8 log.go:172] (0xc0009a6e70) (0xc001ca8140) Stream removed, broadcasting: 3
I0129 14:15:59.264415       8 log.go:172] (0xc0009a6e70) (0xc001ca8280) Stream removed, broadcasting: 5
I0129 14:15:59.264468       8 log.go:172] (0xc0009a6e70) (0xc0020a7040) Stream removed, broadcasting: 1
I0129 14:15:59.264489       8 log.go:172] (0xc0009a6e70) (0xc001ca8140) Stream removed, broadcasting: 3
I0129 14:15:59.264506       8 log.go:172] (0xc0009a6e70) (0xc001ca8280) Stream removed, broadcasting: 5
I0129 14:15:59.264811       8 log.go:172] (0xc0009a6e70) Go away received
Jan 29 14:15:59.265: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 29 14:15:59.265: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:59.265: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:59.324121       8 log.go:172] (0xc0009a7810) (0xc0020a72c0) Create stream
I0129 14:15:59.324181       8 log.go:172] (0xc0009a7810) (0xc0020a72c0) Stream added, broadcasting: 1
I0129 14:15:59.329587       8 log.go:172] (0xc0009a7810) Reply frame received for 1
I0129 14:15:59.329640       8 log.go:172] (0xc0009a7810) (0xc0020a7360) Create stream
I0129 14:15:59.329657       8 log.go:172] (0xc0009a7810) (0xc0020a7360) Stream added, broadcasting: 3
I0129 14:15:59.331537       8 log.go:172] (0xc0009a7810) Reply frame received for 3
I0129 14:15:59.331556       8 log.go:172] (0xc0009a7810) (0xc001d4e000) Create stream
I0129 14:15:59.331564       8 log.go:172] (0xc0009a7810) (0xc001d4e000) Stream added, broadcasting: 5
I0129 14:15:59.332651       8 log.go:172] (0xc0009a7810) Reply frame received for 5
I0129 14:15:59.422342       8 log.go:172] (0xc0009a7810) Data frame received for 3
I0129 14:15:59.422675       8 log.go:172] (0xc0020a7360) (3) Data frame handling
I0129 14:15:59.422707       8 log.go:172] (0xc0020a7360) (3) Data frame sent
I0129 14:15:59.548712       8 log.go:172] (0xc0009a7810) (0xc0020a7360) Stream removed, broadcasting: 3
I0129 14:15:59.548845       8 log.go:172] (0xc0009a7810) Data frame received for 1
I0129 14:15:59.548872       8 log.go:172] (0xc0020a72c0) (1) Data frame handling
I0129 14:15:59.548886       8 log.go:172] (0xc0009a7810) (0xc001d4e000) Stream removed, broadcasting: 5
I0129 14:15:59.548916       8 log.go:172] (0xc0020a72c0) (1) Data frame sent
I0129 14:15:59.548935       8 log.go:172] (0xc0009a7810) (0xc0020a72c0) Stream removed, broadcasting: 1
I0129 14:15:59.549050       8 log.go:172] (0xc0009a7810) (0xc0020a72c0) Stream removed, broadcasting: 1
I0129 14:15:59.549067       8 log.go:172] (0xc0009a7810) (0xc0020a7360) Stream removed, broadcasting: 3
I0129 14:15:59.549079       8 log.go:172] (0xc0009a7810) (0xc001d4e000) Stream removed, broadcasting: 5
Jan 29 14:15:59.549: INFO: Exec stderr: ""
Jan 29 14:15:59.549: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:59.549: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:59.643208       8 log.go:172] (0xc0011488f0) (0xc0020a7720) Create stream
I0129 14:15:59.643332       8 log.go:172] (0xc0011488f0) (0xc0020a7720) Stream added, broadcasting: 1
I0129 14:15:59.651063       8 log.go:172] (0xc0011488f0) Reply frame received for 1
I0129 14:15:59.651132       8 log.go:172] (0xc0011488f0) (0xc00140a640) Create stream
I0129 14:15:59.651143       8 log.go:172] (0xc0011488f0) (0xc00140a640) Stream added, broadcasting: 3
I0129 14:15:59.652893       8 log.go:172] (0xc0011488f0) Reply frame received for 3
I0129 14:15:59.652929       8 log.go:172] (0xc0011488f0) (0xc0027ac640) Create stream
I0129 14:15:59.652948       8 log.go:172] (0xc0011488f0) (0xc0027ac640) Stream added, broadcasting: 5
I0129 14:15:59.654416       8 log.go:172] (0xc0011488f0) Reply frame received for 5
I0129 14:15:59.751907       8 log.go:172] (0xc0011488f0) Data frame received for 3
I0129 14:15:59.752002       8 log.go:172] (0xc00140a640) (3) Data frame handling
I0129 14:15:59.752052       8 log.go:172] (0xc00140a640) (3) Data frame sent
I0129 14:15:59.915863       8 log.go:172] (0xc0011488f0) (0xc00140a640) Stream removed, broadcasting: 3
I0129 14:15:59.916174       8 log.go:172] (0xc0011488f0) (0xc0027ac640) Stream removed, broadcasting: 5
I0129 14:15:59.916221       8 log.go:172] (0xc0011488f0) Data frame received for 1
I0129 14:15:59.916261       8 log.go:172] (0xc0020a7720) (1) Data frame handling
I0129 14:15:59.916290       8 log.go:172] (0xc0020a7720) (1) Data frame sent
I0129 14:15:59.916479       8 log.go:172] (0xc0011488f0) (0xc0020a7720) Stream removed, broadcasting: 1
I0129 14:15:59.916615       8 log.go:172] (0xc0011488f0) Go away received
Jan 29 14:15:59.917: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 29 14:15:59.917: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:15:59.917: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:15:59.993370       8 log.go:172] (0xc0012560b0) (0xc0027acd20) Create stream
I0129 14:15:59.993534       8 log.go:172] (0xc0012560b0) (0xc0027acd20) Stream added, broadcasting: 1
I0129 14:16:00.073001       8 log.go:172] (0xc0012560b0) Reply frame received for 1
I0129 14:16:00.073171       8 log.go:172] (0xc0012560b0) (0xc001ca8500) Create stream
I0129 14:16:00.073210       8 log.go:172] (0xc0012560b0) (0xc001ca8500) Stream added, broadcasting: 3
I0129 14:16:00.076691       8 log.go:172] (0xc0012560b0) Reply frame received for 3
I0129 14:16:00.076721       8 log.go:172] (0xc0012560b0) (0xc0020a77c0) Create stream
I0129 14:16:00.076730       8 log.go:172] (0xc0012560b0) (0xc0020a77c0) Stream added, broadcasting: 5
I0129 14:16:00.079135       8 log.go:172] (0xc0012560b0) Reply frame received for 5
I0129 14:16:00.247624       8 log.go:172] (0xc0012560b0) Data frame received for 3
I0129 14:16:00.247689       8 log.go:172] (0xc001ca8500) (3) Data frame handling
I0129 14:16:00.247701       8 log.go:172] (0xc001ca8500) (3) Data frame sent
I0129 14:16:00.368203       8 log.go:172] (0xc0012560b0) (0xc001ca8500) Stream removed, broadcasting: 3
I0129 14:16:00.368433       8 log.go:172] (0xc0012560b0) Data frame received for 1
I0129 14:16:00.368487       8 log.go:172] (0xc0012560b0) (0xc0020a77c0) Stream removed, broadcasting: 5
I0129 14:16:00.368523       8 log.go:172] (0xc0027acd20) (1) Data frame handling
I0129 14:16:00.368539       8 log.go:172] (0xc0027acd20) (1) Data frame sent
I0129 14:16:00.368545       8 log.go:172] (0xc0012560b0) (0xc0027acd20) Stream removed, broadcasting: 1
I0129 14:16:00.368565       8 log.go:172] (0xc0012560b0) Go away received
Jan 29 14:16:00.368: INFO: Exec stderr: ""
Jan 29 14:16:00.368: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:16:00.369: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:16:00.416943       8 log.go:172] (0xc000f36bb0) (0xc00140b180) Create stream
I0129 14:16:00.416999       8 log.go:172] (0xc000f36bb0) (0xc00140b180) Stream added, broadcasting: 1
I0129 14:16:00.421564       8 log.go:172] (0xc000f36bb0) Reply frame received for 1
I0129 14:16:00.421611       8 log.go:172] (0xc000f36bb0) (0xc001d4e0a0) Create stream
I0129 14:16:00.421624       8 log.go:172] (0xc000f36bb0) (0xc001d4e0a0) Stream added, broadcasting: 3
I0129 14:16:00.424716       8 log.go:172] (0xc000f36bb0) Reply frame received for 3
I0129 14:16:00.424794       8 log.go:172] (0xc000f36bb0) (0xc001d4e140) Create stream
I0129 14:16:00.424810       8 log.go:172] (0xc000f36bb0) (0xc001d4e140) Stream added, broadcasting: 5
I0129 14:16:00.429252       8 log.go:172] (0xc000f36bb0) Reply frame received for 5
I0129 14:16:00.539224       8 log.go:172] (0xc000f36bb0) Data frame received for 3
I0129 14:16:00.539369       8 log.go:172] (0xc001d4e0a0) (3) Data frame handling
I0129 14:16:00.539394       8 log.go:172] (0xc001d4e0a0) (3) Data frame sent
I0129 14:16:00.692757       8 log.go:172] (0xc000f36bb0) Data frame received for 1
I0129 14:16:00.693086       8 log.go:172] (0xc000f36bb0) (0xc001d4e0a0) Stream removed, broadcasting: 3
I0129 14:16:00.693203       8 log.go:172] (0xc00140b180) (1) Data frame handling
I0129 14:16:00.693230       8 log.go:172] (0xc00140b180) (1) Data frame sent
I0129 14:16:00.693259       8 log.go:172] (0xc000f36bb0) (0xc001d4e140) Stream removed, broadcasting: 5
I0129 14:16:00.693352       8 log.go:172] (0xc000f36bb0) (0xc00140b180) Stream removed, broadcasting: 1
I0129 14:16:00.693387       8 log.go:172] (0xc000f36bb0) Go away received
Jan 29 14:16:00.693: INFO: Exec stderr: ""
Jan 29 14:16:00.694: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:16:00.694: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:16:00.756981       8 log.go:172] (0xc0010531e0) (0xc001d4e5a0) Create stream
I0129 14:16:00.757070       8 log.go:172] (0xc0010531e0) (0xc001d4e5a0) Stream added, broadcasting: 1
I0129 14:16:00.762220       8 log.go:172] (0xc0010531e0) Reply frame received for 1
I0129 14:16:00.762251       8 log.go:172] (0xc0010531e0) (0xc00140b220) Create stream
I0129 14:16:00.762263       8 log.go:172] (0xc0010531e0) (0xc00140b220) Stream added, broadcasting: 3
I0129 14:16:00.765101       8 log.go:172] (0xc0010531e0) Reply frame received for 3
I0129 14:16:00.765192       8 log.go:172] (0xc0010531e0) (0xc0027acdc0) Create stream
I0129 14:16:00.765206       8 log.go:172] (0xc0010531e0) (0xc0027acdc0) Stream added, broadcasting: 5
I0129 14:16:00.766814       8 log.go:172] (0xc0010531e0) Reply frame received for 5
I0129 14:16:00.863195       8 log.go:172] (0xc0010531e0) Data frame received for 3
I0129 14:16:00.863356       8 log.go:172] (0xc00140b220) (3) Data frame handling
I0129 14:16:00.863468       8 log.go:172] (0xc00140b220) (3) Data frame sent
I0129 14:16:01.029282       8 log.go:172] (0xc0010531e0) Data frame received for 1
I0129 14:16:01.029478       8 log.go:172] (0xc0010531e0) (0xc00140b220) Stream removed, broadcasting: 3
I0129 14:16:01.029538       8 log.go:172] (0xc001d4e5a0) (1) Data frame handling
I0129 14:16:01.029591       8 log.go:172] (0xc0010531e0) (0xc0027acdc0) Stream removed, broadcasting: 5
I0129 14:16:01.029655       8 log.go:172] (0xc001d4e5a0) (1) Data frame sent
I0129 14:16:01.029676       8 log.go:172] (0xc0010531e0) (0xc001d4e5a0) Stream removed, broadcasting: 1
I0129 14:16:01.029749       8 log.go:172] (0xc0010531e0) Go away received
Jan 29 14:16:01.030: INFO: Exec stderr: ""
Jan 29 14:16:01.030: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9896 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:16:01.030: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:16:01.112490       8 log.go:172] (0xc001053b80) (0xc001d4eaa0) Create stream
I0129 14:16:01.112602       8 log.go:172] (0xc001053b80) (0xc001d4eaa0) Stream added, broadcasting: 1
I0129 14:16:01.120687       8 log.go:172] (0xc001053b80) Reply frame received for 1
I0129 14:16:01.120736       8 log.go:172] (0xc001053b80) (0xc0027ace60) Create stream
I0129 14:16:01.120745       8 log.go:172] (0xc001053b80) (0xc0027ace60) Stream added, broadcasting: 3
I0129 14:16:01.123030       8 log.go:172] (0xc001053b80) Reply frame received for 3
I0129 14:16:01.123079       8 log.go:172] (0xc001053b80) (0xc00140b360) Create stream
I0129 14:16:01.123086       8 log.go:172] (0xc001053b80) (0xc00140b360) Stream added, broadcasting: 5
I0129 14:16:01.124534       8 log.go:172] (0xc001053b80) Reply frame received for 5
I0129 14:16:01.263242       8 log.go:172] (0xc001053b80) Data frame received for 3
I0129 14:16:01.263364       8 log.go:172] (0xc0027ace60) (3) Data frame handling
I0129 14:16:01.263426       8 log.go:172] (0xc0027ace60) (3) Data frame sent
I0129 14:16:01.386167       8 log.go:172] (0xc001053b80) Data frame received for 1
I0129 14:16:01.386270       8 log.go:172] (0xc001053b80) (0xc0027ace60) Stream removed, broadcasting: 3
I0129 14:16:01.386315       8 log.go:172] (0xc001d4eaa0) (1) Data frame handling
I0129 14:16:01.386335       8 log.go:172] (0xc001d4eaa0) (1) Data frame sent
I0129 14:16:01.386363       8 log.go:172] (0xc001053b80) (0xc00140b360) Stream removed, broadcasting: 5
I0129 14:16:01.386411       8 log.go:172] (0xc001053b80) (0xc001d4eaa0) Stream removed, broadcasting: 1
I0129 14:16:01.386471       8 log.go:172] (0xc001053b80) Go away received
Jan 29 14:16:01.387: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:16:01.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9896" for this suite.
Jan 29 14:16:45.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:16:45.586: INFO: namespace e2e-kubelet-etc-hosts-9896 deletion completed in 44.188294949s

• [SLOW TEST:70.018 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
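
The exec calls above boil down to comparing /etc/hosts inside an ordinary pod (where the kubelet rewrites the file and the test mounts the image's original copy at /etc/hosts-original) against a hostNetwork pod (where the node's file is left alone). A minimal sketch of the same check with plain kubectl, assuming a reachable cluster; the pod names and the busybox image are illustrative, not the ones the framework generated:

# Ordinary pod: the kubelet manages /etc/hosts and stamps it with a
# "# Kubernetes-managed hosts file." header.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hosts-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl wait --for=condition=Ready pod/hosts-demo
kubectl exec hosts-demo -- cat /etc/hosts

# hostNetwork pod: the container sees the node's own /etc/hosts,
# which the kubelet does not rewrite.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hosts-demo-hostnet    # illustrative name
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl wait --for=condition=Ready pod/hosts-demo-hostnet
kubectl exec hosts-demo-hostnet -- cat /etc/hosts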
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:16:45.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:16:45.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610" in namespace "projected-5260" to be "success or failure"
Jan 29 14:16:45.807: INFO: Pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610": Phase="Pending", Reason="", readiness=false. Elapsed: 72.89485ms
Jan 29 14:16:47.828: INFO: Pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093785465s
Jan 29 14:16:49.868: INFO: Pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134203515s
Jan 29 14:16:51.877: INFO: Pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142454978s
Jan 29 14:16:53.889: INFO: Pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154279397s
STEP: Saw pod success
Jan 29 14:16:53.889: INFO: Pod "downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610" satisfied condition "success or failure"
Jan 29 14:16:53.892: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610 container client-container: 
STEP: delete the pod
Jan 29 14:16:53.955: INFO: Waiting for pod downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610 to disappear
Jan 29 14:16:54.043: INFO: Pod downwardapi-volume-c4a4a893-972e-472e-adad-6db1fcf9f610 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:16:54.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5260" for this suite.
Jan 29 14:17:00.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:17:00.202: INFO: namespace projected-5260 deletion completed in 6.150357518s

• [SLOW TEST:14.615 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
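
The pod this test creates mounts a projected downwardAPI volume whose item carries an explicit file mode, then asserts on the mode and content the container sees. A sketch of an equivalent manifest, assuming a reachable cluster; the pod name, mount path, projected field, and the 0400 mode are illustrative choices, not the test's generated values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # -L follows the symlink the projected volume creates, so the
    # listed mode is the item's mode rather than the link's.
    command: ["sh", "-c", "ls -lL /etc/podinfo/ && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400              # the per-item mode under test
EOF
kubectl logs downwardapi-mode-demo  # inspect once the pod has Succeeded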
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:17:00.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:17:00.345: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 29 14:17:05.352: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 14:17:07.399: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 29 14:17:09.408: INFO: Creating deployment "test-rollover-deployment"
Jan 29 14:17:09.506: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 29 14:17:11.519: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 29 14:17:11.528: INFO: Ensure that both replica sets have 1 created replica
Jan 29 14:17:11.537: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 29 14:17:11.549: INFO: Updating deployment test-rollover-deployment
Jan 29 14:17:11.549: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 29 14:17:13.576: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 29 14:17:13.587: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 29 14:17:13.595: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:13.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904232, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:15.612: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:15.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904232, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:17.610: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:17.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904232, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:19.615: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:19.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904232, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:21.611: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:21.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904239, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:23.614: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:23.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904239, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:25.612: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:25.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904239, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:27.609: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:27.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904239, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:29.613: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 14:17:29.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904239, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:17:31.617: INFO: 
Jan 29 14:17:31.617: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 29 14:17:31.632: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3280,SelfLink:/apis/apps/v1/namespaces/deployment-3280/deployments/test-rollover-deployment,UID:fbf5fb0f-8a41-4f90-bf87-f16dc18e69bc,ResourceVersion:22322384,Generation:2,CreationTimestamp:2020-01-29 14:17:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-29 14:17:09 +0000 UTC 2020-01-29 14:17:09 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-29 14:17:30 +0000 UTC 2020-01-29 14:17:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 29 14:17:31.638: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3280,SelfLink:/apis/apps/v1/namespaces/deployment-3280/replicasets/test-rollover-deployment-854595fc44,UID:2da9e808-a13d-4975-b2ff-3d94d01959b6,ResourceVersion:22322373,Generation:2,CreationTimestamp:2020-01-29 14:17:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fbf5fb0f-8a41-4f90-bf87-f16dc18e69bc 0xc002d75dc7 0xc002d75dc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 29 14:17:31.638: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 29 14:17:31.638: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3280,SelfLink:/apis/apps/v1/namespaces/deployment-3280/replicasets/test-rollover-controller,UID:edc689b4-3390-414d-8eda-a79cf9f396b9,ResourceVersion:22322383,Generation:2,CreationTimestamp:2020-01-29 14:17:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fbf5fb0f-8a41-4f90-bf87-f16dc18e69bc 0xc002d75cf7 0xc002d75cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 14:17:31.638: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3280,SelfLink:/apis/apps/v1/namespaces/deployment-3280/replicasets/test-rollover-deployment-9b8b997cf,UID:5a51ffa4-f4a6-43a5-af5c-3a1077eeb886,ResourceVersion:22322339,Generation:2,CreationTimestamp:2020-01-29 14:17:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fbf5fb0f-8a41-4f90-bf87-f16dc18e69bc 0xc002d75e90 0xc002d75e91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 14:17:31.643: INFO: Pod "test-rollover-deployment-854595fc44-h7g4t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-h7g4t,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3280,SelfLink:/api/v1/namespaces/deployment-3280/pods/test-rollover-deployment-854595fc44-h7g4t,UID:175adf4f-46d1-41b9-8ed3-8ca4b82faa25,ResourceVersion:22322356,Generation:0,CreationTimestamp:2020-01-29 14:17:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 2da9e808-a13d-4975-b2ff-3d94d01959b6 0xc002816857 0xc002816858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dzrvr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dzrvr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dzrvr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028168d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028168f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:17:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:17:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:17:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:17:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-29 14:17:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-29 14:17:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a7a93b1f2dfea60cf80a776460d1272cbde06793cfb85b26526dac5b26f38e87}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:17:31.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3280" for this suite.
Jan 29 14:17:37.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:17:37.819: INFO: namespace deployment-3280 deletion completed in 6.16916068s

• [SLOW TEST:37.617 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
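
A "rollover" here means issuing a second template update while the first is still in flight: with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10 (visible in the dumped spec above), the controller must abandon the intermediate replica set and drive only the newest one to completion. A sketch of provoking the same behaviour by hand; the deployment name and image tags are illustrative:

kubectl create deployment rollover-demo --image=nginx:1.14-alpine
kubectl rollout status deployment/rollover-demo

# Start one rollout and immediately supersede it with a second update.
kubectl set image deployment/rollover-demo nginx=nginx:1.15-alpine
kubectl set image deployment/rollover-demo nginx=nginx:1.16-alpine
kubectl rollout status deployment/rollover-demo

# The superseded intermediate replica set ends up scaled to 0,
# mirroring the "both old replica sets have no replicas" check above.
kubectl get rs -l app=rollover-demo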
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:17:37.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6229/configmap-test-b20c0266-923b-4e45-842b-9114765e1eb0
STEP: Creating a pod to test consume configMaps
Jan 29 14:17:38.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02" in namespace "configmap-6229" to be "success or failure"
Jan 29 14:17:38.143: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02": Phase="Pending", Reason="", readiness=false. Elapsed: 74.681407ms
Jan 29 14:17:40.151: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082961468s
Jan 29 14:17:42.162: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094267107s
Jan 29 14:17:44.174: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105654357s
Jan 29 14:17:46.182: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02": Phase="Running", Reason="", readiness=true. Elapsed: 8.113987609s
Jan 29 14:17:48.190: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122223509s
STEP: Saw pod success
Jan 29 14:17:48.190: INFO: Pod "pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02" satisfied condition "success or failure"
Jan 29 14:17:48.194: INFO: Trying to get logs from node iruya-node pod pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02 container env-test: 
STEP: delete the pod
Jan 29 14:17:48.260: INFO: Waiting for pod pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02 to disappear
Jan 29 14:17:48.267: INFO: Pod pod-configmaps-91ce9f89-f672-4caa-b262-e23176493d02 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:17:48.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6229" for this suite.
Jan 29 14:17:54.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:17:54.437: INFO: namespace configmap-6229 deletion completed in 6.114063409s

• [SLOW TEST:16.618 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
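
The pod under test maps a ConfigMap key into an environment variable via configMapKeyRef and simply prints it; "success or failure" is the pod exiting cleanly with the expected output. An equivalent minimal manifest, assuming a reachable cluster; the ConfigMap name, key, and variable name are illustrative stand-ins for the generated ones:

kubectl create configmap env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: data-1
EOF
kubectl logs configmap-env-demo     # prints CONFIG_DATA_1=value-1 once Succeeded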
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:17:54.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d8ece57d-d180-40f3-a314-301b45a4328d
STEP: Creating a pod to test consume configMaps
Jan 29 14:17:54.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3" in namespace "projected-3587" to be "success or failure"
Jan 29 14:17:54.599: INFO: Pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.08539ms
Jan 29 14:17:56.610: INFO: Pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034147022s
Jan 29 14:17:59.076: INFO: Pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500407503s
Jan 29 14:18:01.118: INFO: Pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.54209265s
Jan 29 14:18:03.126: INFO: Pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.550396301s
STEP: Saw pod success
Jan 29 14:18:03.126: INFO: Pod "pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3" satisfied condition "success or failure"
Jan 29 14:18:03.135: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 14:18:03.419: INFO: Waiting for pod pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3 to disappear
Jan 29 14:18:03.452: INFO: Pod pod-projected-configmaps-62d7b1a5-ee12-466e-a8e0-f33966eef9c3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:18:03.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3587" for this suite.
Jan 29 14:18:09.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:18:09.710: INFO: namespace projected-3587 deletion completed in 6.247312096s

• [SLOW TEST:15.272 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
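
The non-root variant runs the consuming container under an unprivileged UID, which works because ConfigMap volume files default to mode 0644 and stay world-readable. A sketch under that assumption; UID 1000, the names, and the mount path are illustrative:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # any non-root UID can read a 0644 file
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id -u && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF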
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:18:09.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 29 14:18:09.836: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322525,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 14:18:09.836: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322525,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 29 14:18:19.873: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322539,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 29 14:18:19.874: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322539,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 29 14:18:29.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322554,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 14:18:29.909: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322554,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 29 14:18:39.924: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322568,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 14:18:39.924: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-a,UID:1f0e12b3-c1e3-4d71-84a5-96abf89e8881,ResourceVersion:22322568,Generation:0,CreationTimestamp:2020-01-29 14:18:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 29 14:18:49.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-b,UID:1f72f8be-5b80-49ed-b334-a391ac715bf6,ResourceVersion:22322582,Generation:0,CreationTimestamp:2020-01-29 14:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 14:18:49.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-b,UID:1f72f8be-5b80-49ed-b334-a391ac715bf6,ResourceVersion:22322582,Generation:0,CreationTimestamp:2020-01-29 14:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 29 14:18:59.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-b,UID:1f72f8be-5b80-49ed-b334-a391ac715bf6,ResourceVersion:22322596,Generation:0,CreationTimestamp:2020-01-29 14:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 14:18:59.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4887,SelfLink:/api/v1/namespaces/watch-4887/configmaps/e2e-watch-test-configmap-b,UID:1f72f8be-5b80-49ed-b334-a391ac715bf6,ResourceVersion:22322596,Generation:0,CreationTimestamp:2020-01-29 14:18:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:19:09.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4887" for this suite.
Jan 29 14:19:16.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:19:16.124: INFO: namespace watch-4887 deletion completed in 6.152861442s

• [SLOW TEST:66.414 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
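
The watch test above registers several watchers with different label selectors and asserts that each one receives only the ADDED/MODIFIED/DELETED events for ConfigMaps matching its selector (hence the pairs of identical "Got : ..." lines: two watchers observe the same object). A minimal client-go sketch of that pattern, assuming a client-go release contemporary with this v1.15 cluster (typed Watch calls take no context argument); the namespace and selector mirror the log above:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig the suite uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Watch only ConfigMaps labeled for watcher A; a second watcher with a
        // different selector would receive a disjoint event stream.
        w, err := clientset.CoreV1().ConfigMaps("watch-4887").Watch(metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Each event corresponds to one "Got : <TYPE> ..." line in the log.
        for event := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", event.Type, event.Object)
        }
    }
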
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:19:16.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-8669f2ab-934c-49b8-b0a5-6830817fd233
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:19:16.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5540" for this suite.
Jan 29 14:19:22.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:19:22.449: INFO: namespace secrets-5540 deletion completed in 6.181954561s

• [SLOW TEST:6.325 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
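
This Secrets test passes only if the apiserver rejects a Secret whose data map contains an empty key; no pod is ever scheduled, which is why the whole entry finishes in about six seconds. A sketch of the expected failure, same pre-context client-go vintage as above (the secret name and namespace are illustrative, copied loosely from the log):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        secret := &v1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
            Data: map[string][]byte{
                "": []byte("value-1"), // empty key: rejected by apiserver validation
            },
        }
        // Create should return an Invalid error; the test asserts err != nil.
        _, err = clientset.CoreV1().Secrets("secrets-5540").Create(secret)
        fmt.Println("create returned:", err)
    }
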
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:19:22.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f1fbd254-0561-4d22-a84a-ef5389573417
STEP: Creating a pod to test consume configMaps
Jan 29 14:19:22.585: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216" in namespace "projected-4509" to be "success or failure"
Jan 29 14:19:22.615: INFO: Pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216": Phase="Pending", Reason="", readiness=false. Elapsed: 29.181941ms
Jan 29 14:19:24.644: INFO: Pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058309944s
Jan 29 14:19:26.651: INFO: Pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065541237s
Jan 29 14:19:28.675: INFO: Pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08941824s
Jan 29 14:19:30.691: INFO: Pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105121218s
STEP: Saw pod success
Jan 29 14:19:30.691: INFO: Pod "pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216" satisfied condition "success or failure"
Jan 29 14:19:30.702: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 14:19:31.005: INFO: Waiting for pod pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216 to disappear
Jan 29 14:19:31.014: INFO: Pod pod-projected-configmaps-06fe010f-a096-4d11-b72a-1bfdb71ab216 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:19:31.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4509" for this suite.
Jan 29 14:19:37.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:19:37.180: INFO: namespace projected-4509 deletion completed in 6.160542314s

• [SLOW TEST:14.730 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
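
In the projected-ConfigMap test, the pod mounts the ConfigMap through a projected volume, remaps a key to a nested path, and runs the consuming container as a non-root UID; "success or failure" in the polling above means the pod must terminate, with Phase=Succeeded counted as the pass. A pod-spec sketch of that shape (key names, image, and paths are illustrative, not the exact values the framework generates):

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedConfigMapPod mounts ConfigMap key "data-1" at a remapped path and
    // reads it back as a non-root user; the container exits 0 on success, which
    // moves the pod to Phase=Succeeded.
    func projectedConfigMapPod(nonRootUID int64) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
            Spec: v1.PodSpec{
                RestartPolicy:   v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRootUID},
                Volumes: []v1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: v1.VolumeSource{
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                ConfigMap: &v1.ConfigMapProjection{
                                    LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                    // Remap the key so the file appears at a nested path.
                                    Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                                },
                            }},
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name:      "projected-configmap-volume",
                        MountPath: "/etc/projected-configmap-volume",
                    }},
                }},
            },
        }
    }
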
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:19:37.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 29 14:19:37.340: INFO: Waiting up to 5m0s for pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6" in namespace "emptydir-1665" to be "success or failure"
Jan 29 14:19:37.350: INFO: Pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.513731ms
Jan 29 14:19:39.359: INFO: Pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019863432s
Jan 29 14:19:41.381: INFO: Pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041045628s
Jan 29 14:19:43.616: INFO: Pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276144336s
Jan 29 14:19:45.625: INFO: Pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.284999138s
STEP: Saw pod success
Jan 29 14:19:45.625: INFO: Pod "pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6" satisfied condition "success or failure"
Jan 29 14:19:45.667: INFO: Trying to get logs from node iruya-node pod pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6 container test-container: 
STEP: delete the pod
Jan 29 14:19:45.749: INFO: Waiting for pod pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6 to disappear
Jan 29 14:19:45.824: INFO: Pod pod-f1a0584c-512a-4d44-a373-5aee6c3f9ce6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:19:45.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1665" for this suite.
Jan 29 14:19:51.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:19:52.044: INFO: namespace emptydir-1665 deletion completed in 6.204175327s

• [SLOW TEST:14.863 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
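
The EmptyDir permission tests all follow one template: mount an emptyDir (here backed by tmpfs, i.e. medium Memory), have the container create a file with the requested mode as a non-root user, and check the mode observed in the container's output. The real test uses the e2e mounttest image; the sketch below is a looser busybox analogue of the same idea. The 0644/default-medium variants later in this log differ only in the medium and mode:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod writes a file with mode 0666 into a tmpfs-backed emptyDir and
    // lists it back; an assertion on the printed mode is the test's pass condition.
    func emptyDirPod(nonRootUID int64) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
            Spec: v1.PodSpec{
                RestartPolicy:   v1.RestartPolicyNever,
                SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRootUID},
                Volumes: []v1.Volume{{
                    Name: "test-volume",
                    VolumeSource: v1.VolumeSource{
                        // Medium "Memory" makes the emptyDir a tmpfs mount; leaving
                        // Medium empty uses the node's default (disk-backed) medium.
                        EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
                    },
                }},
                Containers: []v1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
    }
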
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:19:52.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:19:57.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2824" for this suite.
Jan 29 14:20:03.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:20:03.817: INFO: namespace watch-2824 deletion completed in 6.311152266s

• [SLOW TEST:11.773 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
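
The concurrent-watch test leans on a core apimachinery guarantee: events for a resource are ordered by resourceVersion, and a watch started from any historical resourceVersion (within the watch cache window) replays the subsequent events in exactly that order, so the watches the test starts at different points, while the background goroutine keeps producing events, must all agree on the ordering. A sketch of resuming a watch from a known resourceVersion, same client-go vintage as above:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // resumeWatch starts a watch that replays all events after the supplied
    // resourceVersion; every watcher resumed this way sees the same order.
    func resumeWatch(clientset kubernetes.Interface, ns, rv string) (watch.Interface, error) {
        return clientset.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{ResourceVersion: rv})
    }
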
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:20:03.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:20:04.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47" in namespace "projected-1329" to be "success or failure"
Jan 29 14:20:04.135: INFO: Pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47": Phase="Pending", Reason="", readiness=false. Elapsed: 121.371712ms
Jan 29 14:20:06.148: INFO: Pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134000144s
Jan 29 14:20:08.157: INFO: Pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143242574s
Jan 29 14:20:10.165: INFO: Pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150938624s
Jan 29 14:20:12.199: INFO: Pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.185294876s
STEP: Saw pod success
Jan 29 14:20:12.199: INFO: Pod "downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47" satisfied condition "success or failure"
Jan 29 14:20:12.207: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47 container client-container: 
STEP: delete the pod
Jan 29 14:20:12.338: INFO: Waiting for pod downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47 to disappear
Jan 29 14:20:12.346: INFO: Pod downwardapi-volume-f06ece9d-b620-4f0e-a742-bfff98e43f47 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:20:12.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1329" for this suite.
Jan 29 14:20:18.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:20:18.491: INFO: namespace projected-1329 deletion completed in 6.140404944s

• [SLOW TEST:14.674 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
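
The downward-API volume tests expose container resource fields as files via resourceFieldRef. For the cpu-limit case the volume item looks like the sketch below; the memory test that follows is identical except for the resource name, and when the container declares no limit the kubelet writes the node's allocatable value instead, which is exactly what the "default memory limit" test asserts. The divisor and path are illustrative:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // cpuLimitVolume projects the container's cpu limit into a file; the test
    // container cats the file and the framework checks its contents.
    func cpuLimitVolume() v1.Volume {
        return v1.Volume{
            Name: "podinfo",
            VolumeSource: v1.VolumeSource{
                DownwardAPI: &v1.DownwardAPIVolumeSource{
                    Items: []v1.DownwardAPIVolumeFile{{
                        Path: "cpu_limit",
                        ResourceFieldRef: &v1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.cpu",
                            // Divisor 1m reports the limit in millicores.
                            Divisor: resource.MustParse("1m"),
                        },
                    }},
                },
            },
        }
    }
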
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:20:18.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:20:18.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7" in namespace "downward-api-6147" to be "success or failure"
Jan 29 14:20:18.636: INFO: Pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.083201ms
Jan 29 14:20:20.651: INFO: Pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020082955s
Jan 29 14:20:22.664: INFO: Pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03289462s
Jan 29 14:20:24.670: INFO: Pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039427639s
Jan 29 14:20:26.682: INFO: Pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051054442s
STEP: Saw pod success
Jan 29 14:20:26.682: INFO: Pod "downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7" satisfied condition "success or failure"
Jan 29 14:20:26.687: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7 container client-container: 
STEP: delete the pod
Jan 29 14:20:26.786: INFO: Waiting for pod downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7 to disappear
Jan 29 14:20:26.795: INFO: Pod downwardapi-volume-e28ae8b2-4e8b-4bc6-a0a4-07f9bc1394a7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:20:26.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6147" for this suite.
Jan 29 14:20:32.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:20:33.025: INFO: namespace downward-api-6147 deletion completed in 6.220566518s

• [SLOW TEST:14.534 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:20:33.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 29 14:20:33.091: INFO: Waiting up to 5m0s for pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a" in namespace "emptydir-6971" to be "success or failure"
Jan 29 14:20:33.097: INFO: Pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178106ms
Jan 29 14:20:35.104: INFO: Pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013513234s
Jan 29 14:20:37.110: INFO: Pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019575698s
Jan 29 14:20:39.116: INFO: Pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025645424s
Jan 29 14:20:41.169: INFO: Pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077877316s
STEP: Saw pod success
Jan 29 14:20:41.169: INFO: Pod "pod-75449044-e081-4b17-9fe2-8ac8bc7f551a" satisfied condition "success or failure"
Jan 29 14:20:41.178: INFO: Trying to get logs from node iruya-node pod pod-75449044-e081-4b17-9fe2-8ac8bc7f551a container test-container: 
STEP: delete the pod
Jan 29 14:20:41.248: INFO: Waiting for pod pod-75449044-e081-4b17-9fe2-8ac8bc7f551a to disappear
Jan 29 14:20:41.349: INFO: Pod pod-75449044-e081-4b17-9fe2-8ac8bc7f551a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:20:41.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6971" for this suite.
Jan 29 14:20:47.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:20:47.563: INFO: namespace emptydir-6971 deletion completed in 6.20066804s

• [SLOW TEST:14.537 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:20:47.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 29 14:20:56.368: INFO: Successfully updated pod "pod-update-activedeadlineseconds-193defb0-1c7f-49ad-8497-9f8e3f55f474"
Jan 29 14:20:56.368: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-193defb0-1c7f-49ad-8497-9f8e3f55f474" in namespace "pods-8091" to be "terminated due to deadline exceeded"
Jan 29 14:20:56.390: INFO: Pod "pod-update-activedeadlineseconds-193defb0-1c7f-49ad-8497-9f8e3f55f474": Phase="Running", Reason="", readiness=true. Elapsed: 21.490127ms
Jan 29 14:20:58.401: INFO: Pod "pod-update-activedeadlineseconds-193defb0-1c7f-49ad-8497-9f8e3f55f474": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.032260806s
Jan 29 14:20:58.401: INFO: Pod "pod-update-activedeadlineseconds-193defb0-1c7f-49ad-8497-9f8e3f55f474" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:20:58.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8091" for this suite.
Jan 29 14:21:04.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:21:04.598: INFO: namespace pods-8091 deletion completed in 6.189713958s

• [SLOW TEST:17.035 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
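
The activeDeadlineSeconds test creates a running pod, then updates the live object to set a very short deadline; the kubelet must then kill the pod and mark it Phase=Failed with Reason=DeadlineExceeded, which is the "terminated due to deadline exceeded" condition polled above. A sketch of the update step (pre-context client-go; the 5-second value is illustrative):

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // shortenDeadline re-reads the pod and sets a 5s activeDeadlineSeconds, after
    // which the kubelet terminates it with Phase=Failed, Reason=DeadlineExceeded.
    func shortenDeadline(clientset kubernetes.Interface, ns, name string) error {
        pod, err := clientset.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        deadline := int64(5)
        pod.Spec.ActiveDeadlineSeconds = &deadline
        _, err = clientset.CoreV1().Pods(ns).Update(pod)
        return err
    }
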
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:21:04.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 29 14:21:04.662: INFO: Waiting up to 5m0s for pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914" in namespace "emptydir-6837" to be "success or failure"
Jan 29 14:21:04.668: INFO: Pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914": Phase="Pending", Reason="", readiness=false. Elapsed: 5.992551ms
Jan 29 14:21:06.680: INFO: Pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017448329s
Jan 29 14:21:08.722: INFO: Pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060019253s
Jan 29 14:21:10.747: INFO: Pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0844689s
Jan 29 14:21:12.752: INFO: Pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090005279s
STEP: Saw pod success
Jan 29 14:21:12.752: INFO: Pod "pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914" satisfied condition "success or failure"
Jan 29 14:21:12.755: INFO: Trying to get logs from node iruya-node pod pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914 container test-container: 
STEP: delete the pod
Jan 29 14:21:12.814: INFO: Waiting for pod pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914 to disappear
Jan 29 14:21:12.833: INFO: Pod pod-1fb1b8d3-dd4b-4137-b788-3d519cfbf914 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:21:12.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6837" for this suite.
Jan 29 14:21:18.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:21:19.111: INFO: namespace emptydir-6837 deletion completed in 6.269377236s

• [SLOW TEST:14.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:21:19.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0129 14:22:00.734815       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 14:22:00.734: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:22:00.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1973" for this suite.
Jan 29 14:22:18.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:22:18.919: INFO: namespace gc-1973 deletion completed in 18.180362314s

• [SLOW TEST:59.808 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
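
The garbage-collector test deletes a ReplicationController with an orphaning delete and then waits 30 seconds to confirm the GC does not reap the pods: orphan propagation strips the pods' ownerReferences instead of cascading the deletion. The delete call looks roughly like this (1.15-era signature, options passed as a pointer; names illustrative):

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // orphanDeleteRC deletes the RC but leaves its pods running; the GC removes
    // their ownerReferences rather than deleting them.
    func orphanDeleteRC(clientset kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return clientset.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }
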
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:22:18.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:22:19.094: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 29 14:22:24.106: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 14:22:28.210: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 29 14:22:28.328: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3900,SelfLink:/apis/apps/v1/namespaces/deployment-3900/deployments/test-cleanup-deployment,UID:26272143-579c-4daf-b2b7-ead29b242524,ResourceVersion:22323371,Generation:1,CreationTimestamp:2020-01-29 14:22:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 29 14:22:28.389: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3900,SelfLink:/apis/apps/v1/namespaces/deployment-3900/replicasets/test-cleanup-deployment-55bbcbc84c,UID:4653b256-94fc-4d5b-bc34-fe094f4d4893,ResourceVersion:22323374,Generation:1,CreationTimestamp:2020-01-29 14:22:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 26272143-579c-4daf-b2b7-ead29b242524 0xc002f09797 0xc002f09798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 14:22:28.389: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 29 14:22:28.390: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3900,SelfLink:/apis/apps/v1/namespaces/deployment-3900/replicasets/test-cleanup-controller,UID:398b0073-8a35-477e-ab7b-a629e2255445,ResourceVersion:22323373,Generation:1,CreationTimestamp:2020-01-29 14:22:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 26272143-579c-4daf-b2b7-ead29b242524 0xc002f096c7 0xc002f096c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 29 14:22:28.401: INFO: Pod "test-cleanup-controller-j7q5w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-j7q5w,GenerateName:test-cleanup-controller-,Namespace:deployment-3900,SelfLink:/api/v1/namespaces/deployment-3900/pods/test-cleanup-controller-j7q5w,UID:9c3de895-9a82-41ed-9bf2-dac3ffb17175,ResourceVersion:22323368,Generation:0,CreationTimestamp:2020-01-29 14:22:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 398b0073-8a35-477e-ab7b-a629e2255445 0xc002816067 0xc002816068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6wgn8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6wgn8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6wgn8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028160e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002816100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:22:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:22:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:22:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:22:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-29 14:22:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 14:22:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f1aa6c611039e7cee6c263d4a0d0e7d32f7f73b6a656aba549e7f59b6be6af9a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 14:22:28.401: INFO: Pod "test-cleanup-deployment-55bbcbc84c-gnp59" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-gnp59,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3900,SelfLink:/api/v1/namespaces/deployment-3900/pods/test-cleanup-deployment-55bbcbc84c-gnp59,UID:c2e1e112-3486-4cac-84ce-bb9266e56120,ResourceVersion:22323375,Generation:0,CreationTimestamp:2020-01-29 14:22:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 4653b256-94fc-4d5b-bc34-fe094f4d4893 0xc0028161e7 0xc0028161e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6wgn8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6wgn8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6wgn8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002816250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002816270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:22:28.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3900" for this suite.
Jan 29 14:22:36.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:22:36.739: INFO: namespace deployment-3900 deletion completed in 8.222952912s

• [SLOW TEST:17.820 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
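
The deployment-cleanup test has the new Deployment adopt the pre-existing test-cleanup-controller ReplicaSet (note the ownerReferences added to it in the dump above), rolls over to the redis template, and asserts the superseded ReplicaSet is pruned. The knob that drives this is visible in the Deployment dump: RevisionHistoryLimit:*0, i.e. keep zero old ReplicaSets. Set programmatically it looks like:

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
    )

    // withNoHistory modifies a deployment spec so every superseded ReplicaSet is
    // pruned as soon as a rollout completes (RevisionHistoryLimit:*0 in the dump).
    func withNoHistory(d *appsv1.Deployment) *appsv1.Deployment {
        zero := int32(0)
        d.Spec.RevisionHistoryLimit = &zero
        return d
    }
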
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:22:36.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 29 14:22:45.625: INFO: Successfully updated pod "annotationupdate87e91a0a-77c1-4fd6-97be-9b4a0b0c654e"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:22:48.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5092" for this suite.
Jan 29 14:23:10.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:23:10.380: INFO: namespace projected-5092 deletion completed in 22.191332471s

• [SLOW TEST:33.640 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
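
Unlike the one-shot "success or failure" pods, the annotation test keeps its pod running: the pod's own annotations are projected into a downward-API volume file, the test updates the annotations (the "Successfully updated pod" line above), and then waits for the kubelet to refresh the file, since downward-API volume contents track the object while env vars do not. The volume item in question, sketched:

    package main

    import (
        v1 "k8s.io/api/core/v1"
    )

    // annotationsFile exposes the pod's own annotations as a file that the
    // kubelet rewrites when the annotations change.
    func annotationsFile() v1.DownwardAPIVolumeFile {
        return v1.DownwardAPIVolumeFile{
            Path: "annotations",
            FieldRef: &v1.ObjectFieldSelector{
                APIVersion: "v1",
                FieldPath:  "metadata.annotations",
            },
        }
    }
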
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:23:10.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 29 14:23:10.534: INFO: Waiting up to 5m0s for pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead" in namespace "var-expansion-7128" to be "success or failure"
Jan 29 14:23:10.542: INFO: Pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead": Phase="Pending", Reason="", readiness=false. Elapsed: 7.979619ms
Jan 29 14:23:12.556: INFO: Pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021810454s
Jan 29 14:23:14.564: INFO: Pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029737345s
Jan 29 14:23:16.576: INFO: Pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041765439s
Jan 29 14:23:18.591: INFO: Pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057518576s
STEP: Saw pod success
Jan 29 14:23:18.592: INFO: Pod "var-expansion-02251b4c-9e85-4184-851d-851575173ead" satisfied condition "success or failure"
Jan 29 14:23:18.598: INFO: Trying to get logs from node iruya-node pod var-expansion-02251b4c-9e85-4184-851d-851575173ead container dapi-container: 
STEP: delete the pod
Jan 29 14:23:18.716: INFO: Waiting for pod var-expansion-02251b4c-9e85-4184-851d-851575173ead to disappear
Jan 29 14:23:18.722: INFO: Pod var-expansion-02251b4c-9e85-4184-851d-851575173ead no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:23:18.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7128" for this suite.
Jan 29 14:23:24.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:23:24.939: INFO: namespace var-expansion-7128 deletion completed in 6.211337411s

• [SLOW TEST:14.558 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
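
Variable expansion substitutes $(VAR) references in a container's command and args from the container's own env vars before the process starts, so by the time the shell runs, the literal value is already in place; the test's pod echoes an expanded value and must exit 0. A sketch of the relevant container fields (image, variable name, and value are illustrative):

    package main

    import (
        v1 "k8s.io/api/core/v1"
    )

    // expansionContainer shows $(TEST_VAR) being expanded in args from the
    // container's environment; unresolvable $(refs) are passed through verbatim.
    func expansionContainer() v1.Container {
        return v1.Container{
            Name:    "dapi-container",
            Image:   "busybox",
            Command: []string{"sh", "-c"},
            Args:    []string{"echo test-value: $(TEST_VAR)"},
            Env: []v1.EnvVar{{
                Name:  "TEST_VAR",
                Value: "test-value",
            }},
        }
    }
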
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:23:24.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:23:25.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:23:33.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9789" for this suite.
Jan 29 14:24:19.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:24:19.483: INFO: namespace pods-9789 deletion completed in 46.183320145s

• [SLOW TEST:54.543 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
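
The websocket test reads container logs through the apiserver's pods/{name}/log subresource over a websocket connection, which is why the entry above shows a second ">>> kubeConfig" line: the test builds its own raw client for that transport. The plain streaming equivalent with client-go, hitting the same subresource over ordinary HTTP (Stream() takes no context argument in this vintage):

    package main

    import (
        "io"
        "os"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // streamLogs copies the pod's log stream to stdout via the same
    // pods/{name}/log subresource the websocket client reads.
    func streamLogs(clientset kubernetes.Interface, ns, pod string) error {
        rc, err := clientset.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{}).Stream()
        if err != nil {
            return err
        }
        defer rc.Close()
        _, err = io.Copy(os.Stdout, rc)
        return err
    }
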
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:24:19.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 29 14:24:19.628: INFO: Waiting up to 5m0s for pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238" in namespace "downward-api-7928" to be "success or failure"
Jan 29 14:24:19.647: INFO: Pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238": Phase="Pending", Reason="", readiness=false. Elapsed: 19.160322ms
Jan 29 14:24:21.655: INFO: Pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02675944s
Jan 29 14:24:23.689: INFO: Pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061445966s
Jan 29 14:24:25.721: INFO: Pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09353856s
Jan 29 14:24:27.730: INFO: Pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10166286s
STEP: Saw pod success
Jan 29 14:24:27.730: INFO: Pod "downward-api-50f3e703-85be-4fff-b346-0fafbd044238" satisfied condition "success or failure"
Jan 29 14:24:27.733: INFO: Trying to get logs from node iruya-node pod downward-api-50f3e703-85be-4fff-b346-0fafbd044238 container dapi-container: 
STEP: delete the pod
Jan 29 14:24:27.886: INFO: Waiting for pod downward-api-50f3e703-85be-4fff-b346-0fafbd044238 to disappear
Jan 29 14:24:27.899: INFO: Pod downward-api-50f3e703-85be-4fff-b346-0fafbd044238 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:24:27.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7928" for this suite.
Jan 29 14:24:33.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:24:34.053: INFO: namespace downward-api-7928 deletion completed in 6.143021168s

• [SLOW TEST:14.569 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
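
The host-IP test injects the downward API through an env var rather than a volume: valueFrom.fieldRef with fieldPath status.hostIP, which the container prints and the framework matches against the node's address (compare HostIP:10.96.3.65 in the pod dumps earlier in this log). The env entry, sketched with an illustrative variable name:

    package main

    import (
        v1 "k8s.io/api/core/v1"
    )

    // hostIPEnv resolves to the IP of the node the pod is scheduled onto.
    func hostIPEnv() v1.EnvVar {
        return v1.EnvVar{
            Name: "HOST_IP",
            ValueFrom: &v1.EnvVarSource{
                FieldRef: &v1.ObjectFieldSelector{
                    APIVersion: "v1",
                    FieldPath:  "status.hostIP",
                },
            },
        }
    }
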
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:24:34.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d12401ca-8965-44e7-96a5-fc1ea355c138
STEP: Creating a pod to test consume configMaps
Jan 29 14:24:34.193: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5" in namespace "projected-1447" to be "success or failure"
Jan 29 14:24:34.209: INFO: Pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.264501ms
Jan 29 14:24:36.219: INFO: Pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025227846s
Jan 29 14:24:38.227: INFO: Pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034125102s
Jan 29 14:24:40.235: INFO: Pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041722147s
Jan 29 14:24:42.252: INFO: Pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058507928s
STEP: Saw pod success
Jan 29 14:24:42.252: INFO: Pod "pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5" satisfied condition "success or failure"
Jan 29 14:24:42.256: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 14:24:42.357: INFO: Waiting for pod pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5 to disappear
Jan 29 14:24:42.362: INFO: Pod pod-projected-configmaps-9ef9de12-7e22-4021-8fd2-77ffa7e53bd5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:24:42.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1447" for this suite.
Jan 29 14:24:48.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:24:48.538: INFO: namespace projected-1447 deletion completed in 6.172095821s

• [SLOW TEST:14.485 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
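
The pod above mounts a ConfigMap through a projected volume and checks that defaultMode is applied to the resulting files. A sketch of such a spec follows; the ConfigMap name and container name are taken from the log, while the mount path, mode value, and command are assumptions for illustration.

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-example
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        defaultMode: 0400              # file mode applied to every projected file
        sources:
        - configMap:
            name: projected-configmap-test-volume-d12401ca-8965-44e7-96a5-fc1ea355c138
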
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:24:48.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 29 14:25:04.809: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:04.830: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 14:25:06.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:06.836: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 14:25:08.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:08.836: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 14:25:10.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:10.836: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 14:25:12.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:12.835: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 14:25:14.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:14.840: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 14:25:16.830: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 14:25:16.837: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:25:16.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7653" for this suite.
Jan 29 14:25:40.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:25:41.024: INFO: namespace container-lifecycle-hook-7653 deletion completed in 24.181627588s

• [SLOW TEST:52.486 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
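
The sequence above first starts a helper pod that serves HTTP, then creates pod-with-poststart-http-hook, whose postStart hook issues an HTTP GET against the helper; the hook firing is what the "check poststart hook" step verifies. Roughly (the pod name is from the log; the handler's host IP, port, and path are assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: pod-with-poststart-http-hook
      image: k8s.gcr.io/pause:3.1
      lifecycle:
        postStart:
          httpGet:
            host: 10.44.0.1            # pod IP of the hook handler; illustrative
            port: 8080
            path: /echo?msg=poststart  # the handler records the request, proving the hook ran
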
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:25:41.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-5634e984-c56f-4dd1-9ad1-a1a2ebbfcf1e
STEP: Creating a pod to test consume configMaps
Jan 29 14:25:41.126: INFO: Waiting up to 5m0s for pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb" in namespace "configmap-2628" to be "success or failure"
Jan 29 14:25:41.135: INFO: Pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635445ms
Jan 29 14:25:43.147: INFO: Pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021442412s
Jan 29 14:25:45.162: INFO: Pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036432944s
Jan 29 14:25:47.177: INFO: Pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051029857s
Jan 29 14:25:49.187: INFO: Pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060811025s
STEP: Saw pod success
Jan 29 14:25:49.187: INFO: Pod "pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb" satisfied condition "success or failure"
Jan 29 14:25:49.193: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb container configmap-volume-test: 
STEP: delete the pod
Jan 29 14:25:49.289: INFO: Waiting for pod pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb to disappear
Jan 29 14:25:49.299: INFO: Pod pod-configmaps-f186402c-7108-433c-a10e-7a3b7a7dffdb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:25:49.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2628" for this suite.
Jan 29 14:25:55.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:25:55.616: INFO: namespace configmap-2628 deletion completed in 6.31053964s

• [SLOW TEST:14.591 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
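
This case is the non-projected counterpart of the earlier projected-ConfigMap test: defaultMode sits directly on the configMap volume source. A sketch reusing the ConfigMap and container names from the log (all other values assumed):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-example
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/configmap-volume"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-5634e984-c56f-4dd1-9ad1-a1a2ebbfcf1e
        defaultMode: 0400              # applied to each key's file unless an item overrides it
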
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:25:55.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:25:55.705: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 29 14:25:55.782: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 29 14:26:00.791: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 14:26:02.808: INFO: Creating deployment "test-rolling-update-deployment"
Jan 29 14:26:02.831: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 29 14:26:03.217: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 29 14:26:05.266: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jan 29 14:26:05.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:26:07.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:26:09.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:26:11.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904763, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:26:13.277: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 29 14:26:13.289: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9583,SelfLink:/apis/apps/v1/namespaces/deployment-9583/deployments/test-rolling-update-deployment,UID:a4c56ebb-9f14-4198-abfe-f822a827eb49,ResourceVersion:22323934,Generation:1,CreationTimestamp:2020-01-29 14:26:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-29 14:26:03 +0000 UTC 2020-01-29 14:26:03 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-29 14:26:11 +0000 UTC 2020-01-29 14:26:03 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 29 14:26:13.292: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9583,SelfLink:/apis/apps/v1/namespaces/deployment-9583/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:eb5db240-7721-42a0-ba5c-e5faf6e341c4,ResourceVersion:22323922,Generation:1,CreationTimestamp:2020-01-29 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a4c56ebb-9f14-4198-abfe-f822a827eb49 0xc002817097 0xc002817098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 29 14:26:13.292: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 29 14:26:13.292: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9583,SelfLink:/apis/apps/v1/namespaces/deployment-9583/replicasets/test-rolling-update-controller,UID:1fac7935-821f-48bc-96cd-b1e9eda15ab3,ResourceVersion:22323933,Generation:2,CreationTimestamp:2020-01-29 14:25:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a4c56ebb-9f14-4198-abfe-f822a827eb49 0xc002816fb7 0xc002816fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 14:26:13.300: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-p78km" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-p78km,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9583,SelfLink:/api/v1/namespaces/deployment-9583/pods/test-rolling-update-deployment-79f6b9d75c-p78km,UID:02d02043-7716-4fba-aa9a-f8107718bf1f,ResourceVersion:22323921,Generation:0,CreationTimestamp:2020-01-29 14:26:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c eb5db240-7721-42a0-ba5c-e5faf6e341c4 0xc002986b27 0xc002986b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dk777 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dk777,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dk777 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002986ba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002986bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:26:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:26:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:26:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 14:26:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-29 14:26:03 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-29 14:26:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4783e7beb97fd89ce79811489e7d52f9303ffded9232d30138d679d2f0c448a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:26:13.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9583" for this suite.
Jan 29 14:26:19.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:26:19.478: INFO: namespace deployment-9583 deletion completed in 6.169888938s

• [SLOW TEST:23.862 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
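
The Deployment dump above contains everything needed to reconstruct the object under test; the manifest below is assembled from those dumped fields (the name: sample-pod label, the redis test image, one replica, and the 25%/25% rolling-update strategy).

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rolling-update-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: sample-pod
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%            # at most a quarter of desired pods may be down mid-rollout
        maxSurge: 25%                  # at most a quarter extra may be created above desired
    template:
      metadata:
        labels:
          name: sample-pod
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
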
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:26:19.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:26:53.695: INFO: Container started at 2020-01-29 14:26:27 +0000 UTC, pod became ready at 2020-01-29 14:26:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:26:53.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9120" for this suite.
Jan 29 14:27:15.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:27:15.923: INFO: namespace container-probe-9120 deletion completed in 22.215649189s

• [SLOW TEST:56.443 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
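
The probe test above only logs its conclusion: the container started at 14:26:27 and the pod became ready 24 seconds later, consistent with a readiness probe whose initialDelaySeconds keeps the pod unready at first. The spec below is an illustrative sketch of that shape, not the test's exact pod; all names and probe values are assumptions.

  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-initial-delay-example
  spec:
    containers:
    - name: readiness
      image: busybox:1.29
      command: ["sh", "-c", "sleep 600"]
      readinessProbe:
        exec:
          command: ["true"]            # always succeeds, so readiness is gated only by the delay
        initialDelaySeconds: 20
        periodSeconds: 5
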
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:27:15.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:27:16.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23" in namespace "downward-api-9834" to be "success or failure"
Jan 29 14:27:16.064: INFO: Pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.944054ms
Jan 29 14:27:18.072: INFO: Pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017090088s
Jan 29 14:27:20.082: INFO: Pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02705717s
Jan 29 14:27:22.127: INFO: Pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071404918s
Jan 29 14:27:24.141: INFO: Pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085221322s
STEP: Saw pod success
Jan 29 14:27:24.141: INFO: Pod "downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23" satisfied condition "success or failure"
Jan 29 14:27:24.144: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23 container client-container: 
STEP: delete the pod
Jan 29 14:27:24.374: INFO: Waiting for pod downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23 to disappear
Jan 29 14:27:24.382: INFO: Pod downwardapi-volume-af93cb8c-780d-47f6-a3b8-70f9a1a8bf23 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:27:24.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9834" for this suite.
Jan 29 14:27:30.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:27:30.544: INFO: namespace downward-api-9834 deletion completed in 6.15629349s

• [SLOW TEST:14.620 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
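
Here the downward API is consumed as a volume rather than as env vars, and the test asserts that a per-item mode is honored on the file. A minimal sketch (the container name client-container matches the log; the path, mode, and field are assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          mode: 0400                   # the per-item mode under test
          fieldRef:
            fieldPath: metadata.name
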
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:27:30.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 29 14:27:30.681: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 29 14:27:30.693: INFO: Waiting for terminating namespaces to be deleted...
Jan 29 14:27:30.698: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 29 14:27:30.721: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.722: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 29 14:27:30.722: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 29 14:27:30.722: INFO: 	Container weave ready: true, restart count 0
Jan 29 14:27:30.722: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 14:27:30.722: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 29 14:27:30.740: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 29 14:27:30.740: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 29 14:27:30.740: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container coredns ready: true, restart count 0
Jan 29 14:27:30.740: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container etcd ready: true, restart count 0
Jan 29 14:27:30.740: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 29 14:27:30.740: INFO: 	Container weave ready: true, restart count 0
Jan 29 14:27:30.740: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 14:27:30.740: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container coredns ready: true, restart count 0
Jan 29 14:27:30.740: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 29 14:27:30.740: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 29 14:27:30.740: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 29 14:27:30.839: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 29 14:27:30.839: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106.15ee61ccac6567d0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9636/filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106.15ee61cddce6aca6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106.15ee61ce93aaa188], Reason = [Created], Message = [Created container filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106.15ee61ceb505df55], Reason = [Started], Message = [Started container filler-pod-8093b1d0-29a4-41d2-82f6-fae585123106]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8.15ee61ccabd8de72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9636/filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8.15ee61cde75c7116], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8.15ee61ce9af55f74], Reason = [Created], Message = [Created container filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8.15ee61ceb69a19b2], Reason = [Started], Message = [Started container filler-pod-be24e2cc-4b85-444e-88ca-4cab306953c8]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ee61cf02894a97], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:27:42.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9636" for this suite.
Jan 29 14:27:49.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:27:49.352: INFO: namespace sched-pred-9636 deletion completed in 7.259183214s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:18.808 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
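
The predicate test above works by accounting: it sums the CPU already requested on each node (logged per pod above), fills the remainder with pause pods carrying explicit CPU requests, and then shows one more pod failing with "Insufficient cpu". A filler pod is essentially the following; the pause image matches the events above, while the request value is illustrative (the test computes it from node allocatable).

  apiVersion: v1
  kind: Pod
  metadata:
    name: filler-pod-example
  spec:
    containers:
    - name: filler
      image: k8s.gcr.io/pause:3.1      # image used by the filler pods in the events above
      resources:
        requests:
          cpu: 800m                    # sized so the node's remaining allocatable CPU is consumed
        limits:
          cpu: 800m
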
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:27:49.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 29 14:27:49.600: INFO: PodSpec: initContainers in spec.initContainers
Jan 29 14:28:53.819: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f6608025-87d4-495b-9546-bff08747d7a5", GenerateName:"", Namespace:"init-container-5443", SelfLink:"/api/v1/namespaces/init-container-5443/pods/pod-init-f6608025-87d4-495b-9546-bff08747d7a5", UID:"e64afb38-7bbf-4765-a86a-25a38ab51e51", ResourceVersion:"22324297", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715904869, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"600560099"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7dnp6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002026480), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7dnp6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7dnp6", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7dnp6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003024498), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028ce3c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003024520)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003024540)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003024548), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00302454c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715904869, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002d36280), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008ec310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008ec380)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://cc44ec1d486d6a323ad08858c222e6af8683f81efe518df93978d3fc54ce27a1"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d362c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d362a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:28:53.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5443" for this suite.
Jan 29 14:29:15.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:29:15.989: INFO: namespace init-container-5443 deletion completed in 22.150866822s

• [SLOW TEST:86.636 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
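
The pod dump above fully determines the spec under test: two init containers, where init1 runs /bin/false and therefore fails forever, blocking init2 and the app container run1 under a RestartAlways policy. Reconstructed as a manifest from the dumped fields:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-example
    labels:
      name: foo
  spec:
    restartPolicy: Always              # kubelet keeps restarting the failing init container
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/false"]          # always fails, so initialization never completes
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]           # never reached while init1 keeps failing
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: 100m
          memory: "52428800"           # 50Mi, as in the dumped spec
        limits:
          cpu: 100m
          memory: "52428800"
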
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:29:15.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 29 14:29:16.126: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 29 14:29:16.139: INFO: Waiting for terminating namespaces to be deleted...
Jan 29 14:29:16.142: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 29 14:29:16.156: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.156: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 29 14:29:16.156: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 29 14:29:16.156: INFO: 	Container weave ready: true, restart count 0
Jan 29 14:29:16.156: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 14:29:16.156: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 29 14:29:16.165: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 29 14:29:16.165: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 29 14:29:16.165: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container coredns ready: true, restart count 0
Jan 29 14:29:16.165: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container etcd ready: true, restart count 0
Jan 29 14:29:16.165: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 29 14:29:16.165: INFO: 	Container weave ready: true, restart count 0
Jan 29 14:29:16.165: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 14:29:16.165: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container coredns ready: true, restart count 0
Jan 29 14:29:16.165: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 29 14:29:16.165: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 29 14:29:16.165: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ee61e52f8615ab], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:29:17.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3052" for this suite.
Jan 29 14:29:23.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:29:23.448: INFO: namespace sched-pred-3052 deletion completed in 6.201333247s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.458 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
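The FailedScheduling event above comes from a pod whose nodeSelector matches no node label. A minimal reproduction, keeping the pod name from the event; the label key is an assumed, deliberately unmatchable one:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod                # same name as the pod in the event above
spec:
  nodeSelector:
    example.com/no-such-label: "42"   # assumed key; present on no node in the cluster
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # illustrative image

kubectl describe pod restricted-pod then shows the same event: 0/2 nodes are available: 2 node(s) didn't match node selector.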
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:29:23.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 29 14:29:31.682: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:29:31.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6537" for this suite.
Jan 29 14:29:37.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:29:37.971: INFO: namespace container-runtime-6537 deletion completed in 6.220341753s

• [SLOW TEST:14.521 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
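With TerminationMessagePolicy FallbackToLogsOnError, the termination message is only filled from the container log when the container fails; on success, with nothing written to /dev/termination-log, it stays empty, which is exactly what the assertion above (Expected: &{} to match ...) checks. A sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29           # illustrative image
    command: ["/bin/true"]        # succeeds; writes nothing to the log or /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError

After the pod succeeds, kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}' prints nothing.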
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:29:37.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 29 14:29:38.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-737'
Jan 29 14:29:40.814: INFO: stderr: ""
Jan 29 14:29:40.814: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 14:29:40.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:29:41.250: INFO: stderr: ""
Jan 29 14:29:41.251: INFO: stdout: "update-demo-nautilus-bmffn update-demo-nautilus-ps6js "
Jan 29 14:29:41.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmffn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:29:41.416: INFO: stderr: ""
Jan 29 14:29:41.417: INFO: stdout: ""
Jan 29 14:29:41.417: INFO: update-demo-nautilus-bmffn is created but not running
Jan 29 14:29:46.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:29:46.704: INFO: stderr: ""
Jan 29 14:29:46.704: INFO: stdout: "update-demo-nautilus-bmffn update-demo-nautilus-ps6js "
Jan 29 14:29:46.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmffn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:29:47.198: INFO: stderr: ""
Jan 29 14:29:47.198: INFO: stdout: ""
Jan 29 14:29:47.198: INFO: update-demo-nautilus-bmffn is created but not running
Jan 29 14:29:52.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:29:52.393: INFO: stderr: ""
Jan 29 14:29:52.393: INFO: stdout: "update-demo-nautilus-bmffn update-demo-nautilus-ps6js "
Jan 29 14:29:52.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmffn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:29:52.636: INFO: stderr: ""
Jan 29 14:29:52.636: INFO: stdout: "true"
Jan 29 14:29:52.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bmffn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:29:52.756: INFO: stderr: ""
Jan 29 14:29:52.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:29:52.757: INFO: validating pod update-demo-nautilus-bmffn
Jan 29 14:29:52.765: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:29:52.765: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:29:52.765: INFO: update-demo-nautilus-bmffn is verified up and running
Jan 29 14:29:52.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:29:52.852: INFO: stderr: ""
Jan 29 14:29:52.852: INFO: stdout: "true"
Jan 29 14:29:52.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:29:53.015: INFO: stderr: ""
Jan 29 14:29:53.015: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:29:53.015: INFO: validating pod update-demo-nautilus-ps6js
Jan 29 14:29:53.036: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:29:53.036: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:29:53.036: INFO: update-demo-nautilus-ps6js is verified up and running
STEP: scaling down the replication controller
Jan 29 14:29:53.038: INFO: scanned /root for discovery docs: 
Jan 29 14:29:53.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-737'
Jan 29 14:29:54.480: INFO: stderr: ""
Jan 29 14:29:54.481: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 14:29:54.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:29:54.722: INFO: stderr: ""
Jan 29 14:29:54.722: INFO: stdout: "update-demo-nautilus-bmffn update-demo-nautilus-ps6js "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 29 14:29:59.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:30:00.141: INFO: stderr: ""
Jan 29 14:30:00.141: INFO: stdout: "update-demo-nautilus-bmffn update-demo-nautilus-ps6js "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 29 14:30:05.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:30:05.270: INFO: stderr: ""
Jan 29 14:30:05.270: INFO: stdout: "update-demo-nautilus-bmffn update-demo-nautilus-ps6js "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 29 14:30:10.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:30:10.503: INFO: stderr: ""
Jan 29 14:30:10.503: INFO: stdout: "update-demo-nautilus-ps6js "
Jan 29 14:30:10.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:10.644: INFO: stderr: ""
Jan 29 14:30:10.644: INFO: stdout: "true"
Jan 29 14:30:10.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:10.776: INFO: stderr: ""
Jan 29 14:30:10.776: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:30:10.776: INFO: validating pod update-demo-nautilus-ps6js
Jan 29 14:30:10.785: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:30:10.785: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:30:10.785: INFO: update-demo-nautilus-ps6js is verified up and running
STEP: scaling up the replication controller
Jan 29 14:30:10.786: INFO: scanned /root for discovery docs: 
Jan 29 14:30:10.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-737'
Jan 29 14:30:12.047: INFO: stderr: ""
Jan 29 14:30:12.047: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 14:30:12.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:30:12.235: INFO: stderr: ""
Jan 29 14:30:12.235: INFO: stdout: "update-demo-nautilus-ps6js update-demo-nautilus-z7nq4 "
Jan 29 14:30:12.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:12.378: INFO: stderr: ""
Jan 29 14:30:12.378: INFO: stdout: "true"
Jan 29 14:30:12.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:12.677: INFO: stderr: ""
Jan 29 14:30:12.678: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:30:12.678: INFO: validating pod update-demo-nautilus-ps6js
Jan 29 14:30:12.713: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:30:12.713: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:30:12.713: INFO: update-demo-nautilus-ps6js is verified up and running
Jan 29 14:30:12.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7nq4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:12.969: INFO: stderr: ""
Jan 29 14:30:12.969: INFO: stdout: ""
Jan 29 14:30:12.969: INFO: update-demo-nautilus-z7nq4 is created but not running
Jan 29 14:30:17.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-737'
Jan 29 14:30:18.153: INFO: stderr: ""
Jan 29 14:30:18.153: INFO: stdout: "update-demo-nautilus-ps6js update-demo-nautilus-z7nq4 "
Jan 29 14:30:18.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:18.373: INFO: stderr: ""
Jan 29 14:30:18.373: INFO: stdout: "true"
Jan 29 14:30:18.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps6js -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:18.526: INFO: stderr: ""
Jan 29 14:30:18.526: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:30:18.527: INFO: validating pod update-demo-nautilus-ps6js
Jan 29 14:30:18.548: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:30:18.548: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:30:18.548: INFO: update-demo-nautilus-ps6js is verified up and running
Jan 29 14:30:18.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7nq4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:18.710: INFO: stderr: ""
Jan 29 14:30:18.710: INFO: stdout: "true"
Jan 29 14:30:18.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7nq4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-737'
Jan 29 14:30:18.840: INFO: stderr: ""
Jan 29 14:30:18.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 14:30:18.840: INFO: validating pod update-demo-nautilus-z7nq4
Jan 29 14:30:18.846: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 14:30:18.846: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 29 14:30:18.847: INFO: update-demo-nautilus-z7nq4 is verified up and running
STEP: using delete to clean up resources
Jan 29 14:30:18.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-737'
Jan 29 14:30:19.023: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:30:19.023: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 29 14:30:19.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-737'
Jan 29 14:30:19.170: INFO: stderr: "No resources found.\n"
Jan 29 14:30:19.170: INFO: stdout: ""
Jan 29 14:30:19.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-737 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 29 14:30:19.326: INFO: stderr: ""
Jan 29 14:30:19.327: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:30:19.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-737" for this suite.
Jan 29 14:30:41.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:30:41.467: INFO: namespace kubectl-737 deletion completed in 22.132162456s

• [SLOW TEST:63.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
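The scale test above boils down to two commands run in a loop: scale the replication controller, then poll the pod list until the replica count matches. Both commands are taken from the log; only the namespace placeholder is a substitution:

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=<ns>
# poll until exactly one pod name is printed
kubectl get pods -l name=update-demo --namespace=<ns> \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

The repeated "Replicas for name=update-demo: expected=1 actual=2" lines are that poll: kubectl scale returns as soon as the RC spec is updated, so the surplus pod lingers in the list until it finishes terminating.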
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:30:41.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-50f35ae5-3d95-4aa8-aa14-1230b3ffe4ac in namespace container-probe-2649
Jan 29 14:30:51.637: INFO: Started pod test-webserver-50f35ae5-3d95-4aa8-aa14-1230b3ffe4ac in namespace container-probe-2649
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 14:30:51.646: INFO: Initial restart count of pod test-webserver-50f35ae5-3d95-4aa8-aa14-1230b3ffe4ac is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:34:53.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2649" for this suite.
Jan 29 14:34:59.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:34:59.595: INFO: namespace container-probe-2649 deletion completed in 6.228065678s

• [SLOW TEST:258.128 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
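The probe test above creates a webserver pod with an HTTP liveness probe and then simply watches restartCount stay at 0 for roughly four minutes. A minimal sketch; the image and probe path here are assumptions (the probed path must be one the server actually serves):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo          # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                    # assumed healthy path
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1

While the server keeps answering, kubectl get pod liveness-ok-demo -o jsonpath='{.status.containerStatuses[0].restartCount}' keeps printing 0.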
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:34:59.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-741fb769-cc8b-43c3-8084-d235938ece5d
STEP: Creating a pod to test consume secrets
Jan 29 14:34:59.758: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684" in namespace "projected-6758" to be "success or failure"
Jan 29 14:34:59.765: INFO: Pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516723ms
Jan 29 14:35:01.775: INFO: Pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016569107s
Jan 29 14:35:03.836: INFO: Pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078200014s
Jan 29 14:35:05.859: INFO: Pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100722668s
Jan 29 14:35:07.883: INFO: Pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12469444s
STEP: Saw pod success
Jan 29 14:35:07.883: INFO: Pod "pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684" satisfied condition "success or failure"
Jan 29 14:35:07.901: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 14:35:08.052: INFO: Waiting for pod pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684 to disappear
Jan 29 14:35:08.062: INFO: Pod pod-projected-secrets-d0304294-08cd-4285-b954-c1c44029d684 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:35:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6758" for this suite.
Jan 29 14:35:14.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:35:14.230: INFO: namespace projected-6758 deletion completed in 6.128872896s

• [SLOW TEST:14.635 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
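The projected-secret test above follows the suite's usual "success or failure" pattern: a secret is projected into a volume and a one-shot container cats the file and exits 0. A self-contained sketch with hypothetical names and an illustrative image:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo     # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29           # illustrative image
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo

The pod should reach Succeeded, and kubectl logs pod-projected-secrets-demo prints value-1.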
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:35:14.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2044.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2044.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 14:35:26.411: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.421: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.427: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.430: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.433: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.436: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.439: INFO: Unable to read jessie_udp@PodARecord from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.443: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056: the server could not find the requested resource (get pods dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056)
Jan 29 14:35:26.443: INFO: Lookups using dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 29 14:35:31.497: INFO: DNS probes using dns-2044/dns-test-b1780797-57b0-4ca5-ad4a-891da04b8056 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:35:31.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2044" for this suite.
Jan 29 14:35:37.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:35:37.766: INFO: namespace dns-2044 deletion completed in 6.162259874s

• [SLOW TEST:23.536 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
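The shell loops above just run dig against the cluster DNS from inside two prober images (wheezy and jessie) and write OK marker files that the test then reads back. A quick manual equivalent, with an illustrative image choice:

kubectl run dns-check --rm -i --restart=Never --image=busybox:1.29 -- \
  nslookup kubernetes.default.svc.cluster.local

Or, from any pod that has dig available, the same query the probes use:

dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A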
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:35:37.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 29 14:35:37.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 29 14:35:38.138: INFO: stderr: ""
Jan 29 14:35:38.139: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:35:38.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7726" for this suite.
Jan 29 14:35:44.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:35:44.449: INFO: namespace kubectl-7726 deletion completed in 6.288921414s

• [SLOW TEST:6.682 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
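The api-versions check is easy to reproduce by hand; grep -x matches the whole line, so this exits non-zero unless the core v1 group is present:

kubectl api-versions | grep -x v1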
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:35:44.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:35:44.636: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 29 14:35:44.658: INFO: Number of nodes with available pods: 0
Jan 29 14:35:44.658: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 29 14:35:44.783: INFO: Number of nodes with available pods: 0
Jan 29 14:35:44.783: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:45.796: INFO: Number of nodes with available pods: 0
Jan 29 14:35:45.796: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:46.793: INFO: Number of nodes with available pods: 0
Jan 29 14:35:46.794: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:47.795: INFO: Number of nodes with available pods: 0
Jan 29 14:35:47.795: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:48.792: INFO: Number of nodes with available pods: 0
Jan 29 14:35:48.792: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:49.789: INFO: Number of nodes with available pods: 0
Jan 29 14:35:49.789: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:50.792: INFO: Number of nodes with available pods: 0
Jan 29 14:35:50.792: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:51.796: INFO: Number of nodes with available pods: 0
Jan 29 14:35:51.796: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:52.809: INFO: Number of nodes with available pods: 0
Jan 29 14:35:52.809: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:53.795: INFO: Number of nodes with available pods: 1
Jan 29 14:35:53.795: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 29 14:35:53.903: INFO: Number of nodes with available pods: 1
Jan 29 14:35:53.903: INFO: Number of running nodes: 0, number of available pods: 1
Jan 29 14:35:54.921: INFO: Number of nodes with available pods: 0
Jan 29 14:35:54.921: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 29 14:35:54.942: INFO: Number of nodes with available pods: 0
Jan 29 14:35:54.942: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:55.949: INFO: Number of nodes with available pods: 0
Jan 29 14:35:55.949: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:56.967: INFO: Number of nodes with available pods: 0
Jan 29 14:35:56.967: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:57.953: INFO: Number of nodes with available pods: 0
Jan 29 14:35:57.953: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:58.953: INFO: Number of nodes with available pods: 0
Jan 29 14:35:58.953: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:35:59.965: INFO: Number of nodes with available pods: 0
Jan 29 14:35:59.965: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:00.948: INFO: Number of nodes with available pods: 0
Jan 29 14:36:00.948: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:01.949: INFO: Number of nodes with available pods: 0
Jan 29 14:36:01.949: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:02.950: INFO: Number of nodes with available pods: 0
Jan 29 14:36:02.950: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:03.955: INFO: Number of nodes with available pods: 0
Jan 29 14:36:03.955: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:04.950: INFO: Number of nodes with available pods: 0
Jan 29 14:36:04.951: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:05.953: INFO: Number of nodes with available pods: 0
Jan 29 14:36:05.954: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:06.951: INFO: Number of nodes with available pods: 0
Jan 29 14:36:06.951: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:07.951: INFO: Number of nodes with available pods: 0
Jan 29 14:36:07.951: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:08.949: INFO: Number of nodes with available pods: 0
Jan 29 14:36:08.949: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:10.003: INFO: Number of nodes with available pods: 0
Jan 29 14:36:10.003: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:10.950: INFO: Number of nodes with available pods: 0
Jan 29 14:36:10.950: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:11.956: INFO: Number of nodes with available pods: 0
Jan 29 14:36:11.956: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:12.947: INFO: Number of nodes with available pods: 0
Jan 29 14:36:12.947: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:36:13.955: INFO: Number of nodes with available pods: 1
Jan 29 14:36:13.956: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8978, will wait for the garbage collector to delete the pods
Jan 29 14:36:14.028: INFO: Deleting DaemonSet.extensions daemon-set took: 9.748722ms
Jan 29 14:36:14.329: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.416637ms
Jan 29 14:36:20.834: INFO: Number of nodes with available pods: 0
Jan 29 14:36:20.834: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 14:36:20.836: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8978/daemonsets","resourceVersion":"22325181"},"items":null}

Jan 29 14:36:20.839: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8978/pods","resourceVersion":"22325181"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:36:20.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8978" for this suite.
Jan 29 14:36:26.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:36:27.069: INFO: namespace daemonsets-8978 deletion completed in 6.183716613s

• [SLOW TEST:42.619 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
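The "complex daemon" test drives a DaemonSet through a node selector: pods appear only on nodes carrying the matching label, disappear when the label changes, and come back under a RollingUpdate strategy once the selector is updated to the new label. A sketch; the label key/value and image are assumptions, the DaemonSet name matches the test above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                # name used by the test above
spec:
  selector:
    matchLabels:
      app: daemon-set-demo        # assumed pod label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      nodeSelector:
        color: green              # assumed node label; the test flips blue -> green
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # illustrative image

With this in place, kubectl label node iruya-node color=green --overwrite lets the daemon pod schedule there, and removing or changing the label drains it again, which is the behavior the poll lines above are waiting on.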
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:36:27.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 29 14:36:27.151: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 29 14:36:27.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2564'
Jan 29 14:36:27.571: INFO: stderr: ""
Jan 29 14:36:27.571: INFO: stdout: "service/redis-slave created\n"
Jan 29 14:36:27.572: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 29 14:36:27.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2564'
Jan 29 14:36:28.216: INFO: stderr: ""
Jan 29 14:36:28.217: INFO: stdout: "service/redis-master created\n"
Jan 29 14:36:28.217: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 29 14:36:28.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2564'
Jan 29 14:36:28.897: INFO: stderr: ""
Jan 29 14:36:28.897: INFO: stdout: "service/frontend created\n"
Jan 29 14:36:28.898: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 29 14:36:28.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2564'
Jan 29 14:36:29.471: INFO: stderr: ""
Jan 29 14:36:29.472: INFO: stdout: "deployment.apps/frontend created\n"
Jan 29 14:36:29.472: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 29 14:36:29.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2564'
Jan 29 14:36:30.168: INFO: stderr: ""
Jan 29 14:36:30.168: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 29 14:36:30.168: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 29 14:36:30.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2564'
Jan 29 14:36:31.146: INFO: stderr: ""
Jan 29 14:36:31.146: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 29 14:36:31.146: INFO: Waiting for all frontend pods to be Running.
Jan 29 14:36:56.199: INFO: Waiting for frontend to serve content.
Jan 29 14:36:56.615: INFO: Trying to add a new entry to the guestbook.
Jan 29 14:36:56.703: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 29 14:36:56.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2564'
Jan 29 14:36:57.037: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:36:57.037: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 14:36:57.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2564'
Jan 29 14:36:57.272: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:36:57.272: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 14:36:57.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2564'
Jan 29 14:36:57.442: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:36:57.442: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 14:36:57.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2564'
Jan 29 14:36:57.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:36:57.561: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 14:36:57.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2564'
Jan 29 14:36:57.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:36:57.697: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 14:36:57.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2564'
Jan 29 14:36:57.879: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 14:36:57.880: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:36:57.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2564" for this suite.
Jan 29 14:37:38.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:37:38.342: INFO: namespace kubectl-2564 deletion completed in 40.385004129s

• [SLOW TEST:71.273 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
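Once all guestbook components are up, the test pokes the PHP frontend over HTTP to add and read back an entry. A hedged manual equivalent, assuming the gb-frontend image's guestbook.php entry point (URL shape as used by this suite; verify against your image):

kubectl port-forward svc/frontend 8080:80 --namespace=<ns> &
curl -s 'http://localhost:8080/guestbook.php?cmd=set&key=messages&value=hello'
curl -s 'http://localhost:8080/guestbook.php?cmd=get&key=messages'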
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:37:38.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:37:38.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf" in namespace "downward-api-9099" to be "success or failure"
Jan 29 14:37:38.522: INFO: Pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf": Phase="Pending", Reason="", readiness=false. Elapsed: 92.361309ms
Jan 29 14:37:40.561: INFO: Pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130559758s
Jan 29 14:37:42.576: INFO: Pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146151787s
Jan 29 14:37:44.592: INFO: Pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162398042s
Jan 29 14:37:46.601: INFO: Pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.170840067s
STEP: Saw pod success
Jan 29 14:37:46.601: INFO: Pod "downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf" satisfied condition "success or failure"
Jan 29 14:37:46.608: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf container client-container: 
STEP: delete the pod
Jan 29 14:37:46.693: INFO: Waiting for pod downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf to disappear
Jan 29 14:37:46.704: INFO: Pod downwardapi-volume-674a1e56-c9fd-4faa-8dc3-f603d02e80cf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:37:46.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9099" for this suite.
Jan 29 14:37:52.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:37:52.852: INFO: namespace downward-api-9099 deletion completed in 6.143720797s

• [SLOW TEST:14.510 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
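In the downward API volume test above, the pod mounts a volume exposing the container's own memory request, and a one-shot container prints it. A sketch with hypothetical names; a 32Mi request surfaces in the file as bytes (33554432):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29           # illustrative image
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory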
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:37:52.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:37:52.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba" in namespace "projected-302" to be "success or failure"
Jan 29 14:37:52.981: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 7.966003ms
Jan 29 14:37:54.989: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015617155s
Jan 29 14:37:57.001: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027930535s
Jan 29 14:37:59.011: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037995708s
Jan 29 14:38:01.017: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043763982s
Jan 29 14:38:03.022: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049533785s
STEP: Saw pod success
Jan 29 14:38:03.023: INFO: Pod "downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba" satisfied condition "success or failure"
Jan 29 14:38:03.025: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba container client-container: 
STEP: delete the pod
Jan 29 14:38:03.340: INFO: Waiting for pod downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba to disappear
Jan 29 14:38:03.492: INFO: Pod downwardapi-volume-0ba39b63-5afc-40d6-9a83-e138e46bf0ba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:38:03.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-302" for this suite.
Jan 29 14:38:09.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:38:09.619: INFO: namespace projected-302 deletion completed in 6.109959747s

• [SLOW TEST:16.765 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
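
What this spec exercises is the fallback: the container sets no cpu limit, so the downward API reports the node's allocatable CPU instead. A sketch of the projected-volume form the spec name implies; everything outside the resourceFieldRef stanza is an assumption:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set: the file falls back to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
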
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:38:09.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9055
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 14:38:09.689: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 14:38:43.858: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9055 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:38:43.858: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:38:43.964468       8 log.go:172] (0xc000d1a370) (0xc0017df040) Create stream
I0129 14:38:43.964704       8 log.go:172] (0xc000d1a370) (0xc0017df040) Stream added, broadcasting: 1
I0129 14:38:43.975024       8 log.go:172] (0xc000d1a370) Reply frame received for 1
I0129 14:38:43.975131       8 log.go:172] (0xc000d1a370) (0xc00037f4a0) Create stream
I0129 14:38:43.975144       8 log.go:172] (0xc000d1a370) (0xc00037f4a0) Stream added, broadcasting: 3
I0129 14:38:43.976900       8 log.go:172] (0xc000d1a370) Reply frame received for 3
I0129 14:38:43.976932       8 log.go:172] (0xc000d1a370) (0xc00237b720) Create stream
I0129 14:38:43.976954       8 log.go:172] (0xc000d1a370) (0xc00237b720) Stream added, broadcasting: 5
I0129 14:38:43.978162       8 log.go:172] (0xc000d1a370) Reply frame received for 5
I0129 14:38:44.236853       8 log.go:172] (0xc000d1a370) Data frame received for 3
I0129 14:38:44.237016       8 log.go:172] (0xc00037f4a0) (3) Data frame handling
I0129 14:38:44.237072       8 log.go:172] (0xc00037f4a0) (3) Data frame sent
I0129 14:38:44.431815       8 log.go:172] (0xc000d1a370) (0xc00037f4a0) Stream removed, broadcasting: 3
I0129 14:38:44.431998       8 log.go:172] (0xc000d1a370) Data frame received for 1
I0129 14:38:44.432119       8 log.go:172] (0xc000d1a370) (0xc00237b720) Stream removed, broadcasting: 5
I0129 14:38:44.432204       8 log.go:172] (0xc0017df040) (1) Data frame handling
I0129 14:38:44.432283       8 log.go:172] (0xc0017df040) (1) Data frame sent
I0129 14:38:44.432345       8 log.go:172] (0xc000d1a370) (0xc0017df040) Stream removed, broadcasting: 1
I0129 14:38:44.432397       8 log.go:172] (0xc000d1a370) Go away received
I0129 14:38:44.432900       8 log.go:172] (0xc000d1a370) (0xc0017df040) Stream removed, broadcasting: 1
I0129 14:38:44.432922       8 log.go:172] (0xc000d1a370) (0xc00037f4a0) Stream removed, broadcasting: 3
I0129 14:38:44.432933       8 log.go:172] (0xc000d1a370) (0xc00237b720) Stream removed, broadcasting: 5
Jan 29 14:38:44.433: INFO: Waiting for endpoints: map[]
Jan 29 14:38:44.439: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9055 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:38:44.439: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:38:44.502623       8 log.go:172] (0xc000d1af20) (0xc0017df860) Create stream
I0129 14:38:44.502751       8 log.go:172] (0xc000d1af20) (0xc0017df860) Stream added, broadcasting: 1
I0129 14:38:44.514701       8 log.go:172] (0xc000d1af20) Reply frame received for 1
I0129 14:38:44.514812       8 log.go:172] (0xc000d1af20) (0xc00237b7c0) Create stream
I0129 14:38:44.514823       8 log.go:172] (0xc000d1af20) (0xc00237b7c0) Stream added, broadcasting: 3
I0129 14:38:44.516812       8 log.go:172] (0xc000d1af20) Reply frame received for 3
I0129 14:38:44.516927       8 log.go:172] (0xc000d1af20) (0xc00037fae0) Create stream
I0129 14:38:44.516940       8 log.go:172] (0xc000d1af20) (0xc00037fae0) Stream added, broadcasting: 5
I0129 14:38:44.519530       8 log.go:172] (0xc000d1af20) Reply frame received for 5
I0129 14:38:44.663001       8 log.go:172] (0xc000d1af20) Data frame received for 3
I0129 14:38:44.663070       8 log.go:172] (0xc00237b7c0) (3) Data frame handling
I0129 14:38:44.663101       8 log.go:172] (0xc00237b7c0) (3) Data frame sent
I0129 14:38:44.763723       8 log.go:172] (0xc000d1af20) (0xc00237b7c0) Stream removed, broadcasting: 3
I0129 14:38:44.763895       8 log.go:172] (0xc000d1af20) Data frame received for 1
I0129 14:38:44.763918       8 log.go:172] (0xc0017df860) (1) Data frame handling
I0129 14:38:44.764200       8 log.go:172] (0xc0017df860) (1) Data frame sent
I0129 14:38:44.764279       8 log.go:172] (0xc000d1af20) (0xc00037fae0) Stream removed, broadcasting: 5
I0129 14:38:44.764348       8 log.go:172] (0xc000d1af20) (0xc0017df860) Stream removed, broadcasting: 1
I0129 14:38:44.764373       8 log.go:172] (0xc000d1af20) Go away received
I0129 14:38:44.764957       8 log.go:172] (0xc000d1af20) (0xc0017df860) Stream removed, broadcasting: 1
I0129 14:38:44.764980       8 log.go:172] (0xc000d1af20) (0xc00237b7c0) Stream removed, broadcasting: 3
I0129 14:38:44.764992       8 log.go:172] (0xc000d1af20) (0xc00037fae0) Stream removed, broadcasting: 5
Jan 29 14:38:44.765: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:38:44.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9055" for this suite.
Jan 29 14:39:08.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:39:08.885: INFO: namespace pod-network-test-9055 deletion completed in 24.109469375s

• [SLOW TEST:59.266 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
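
The ExecWithOptions lines above are the framework running curl inside the host test pod against the netserver's /dial endpoint, which relays a UDP probe to the target pod and reports which hostname answered. The same probe can be rerun by hand; the IPs are this run's pod addresses and will differ on any other cluster:

kubectl exec host-test-container-pod -c hostexec --namespace=pod-network-test-9055 -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'"
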
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:39:08.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 29 14:39:09.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5834'
Jan 29 14:39:09.451: INFO: stderr: ""
Jan 29 14:39:09.451: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 29 14:39:10.466: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:10.466: INFO: Found 0 / 1
Jan 29 14:39:11.537: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:11.537: INFO: Found 0 / 1
Jan 29 14:39:12.462: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:12.462: INFO: Found 0 / 1
Jan 29 14:39:13.458: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:13.458: INFO: Found 0 / 1
Jan 29 14:39:14.462: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:14.462: INFO: Found 0 / 1
Jan 29 14:39:15.462: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:15.462: INFO: Found 0 / 1
Jan 29 14:39:16.461: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:16.461: INFO: Found 0 / 1
Jan 29 14:39:17.461: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:17.461: INFO: Found 1 / 1
Jan 29 14:39:17.461: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 29 14:39:17.467: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:17.467: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 29 14:39:17.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-ttdql --namespace=kubectl-5834 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 29 14:39:17.705: INFO: stderr: ""
Jan 29 14:39:17.705: INFO: stdout: "pod/redis-master-ttdql patched\n"
STEP: checking annotations
Jan 29 14:39:17.734: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:39:17.734: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:39:17.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5834" for this suite.
Jan 29 14:39:39.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:39:39.962: INFO: namespace kubectl-5834 deletion completed in 22.222932981s

• [SLOW TEST:31.076 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
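
The patch applied above is a strategic-merge patch on pod metadata. The check is reproducible with the two commands below; the pod name is this run's generated name, and a fresh run produces a different suffix:

kubectl patch pod redis-master-ttdql --namespace=kubectl-5834 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-ttdql --namespace=kubectl-5834 \
  -o jsonpath='{.metadata.annotations.x}'    # prints: y
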
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:39:39.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-99adf63c-61d4-4cdd-8582-fd9764974542
STEP: Creating a pod to test consume configMaps
Jan 29 14:39:40.132: INFO: Waiting up to 5m0s for pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14" in namespace "configmap-5074" to be "success or failure"
Jan 29 14:39:40.148: INFO: Pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14": Phase="Pending", Reason="", readiness=false. Elapsed: 15.749611ms
Jan 29 14:39:42.159: INFO: Pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026702505s
Jan 29 14:39:44.166: INFO: Pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033179118s
Jan 29 14:39:46.172: INFO: Pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039029051s
Jan 29 14:39:48.181: INFO: Pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047972897s
STEP: Saw pod success
Jan 29 14:39:48.181: INFO: Pod "pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14" satisfied condition "success or failure"
Jan 29 14:39:48.183: INFO: Trying to get logs from node iruya-node pod pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14 container configmap-volume-test: 
STEP: delete the pod
Jan 29 14:39:48.332: INFO: Waiting for pod pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14 to disappear
Jan 29 14:39:48.351: INFO: Pod pod-configmaps-df2954c3-1de8-43e7-9902-a8c3482e3f14 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:39:48.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5074" for this suite.
Jan 29 14:39:54.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:39:54.523: INFO: namespace configmap-5074 deletion completed in 6.164494559s

• [SLOW TEST:14.561 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
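
The "as non-root" part of this spec comes down to a pod-level securityContext; the ConfigMap volume itself is unchanged. A minimal sketch under assumed names and an assumed UID:

kubectl create configmap my-configmap --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # assumed non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap
EOF
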
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:39:54.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 29 14:39:54.638: INFO: Waiting up to 5m0s for pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba" in namespace "containers-9478" to be "success or failure"
Jan 29 14:39:54.665: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 26.168515ms
Jan 29 14:39:56.795: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156852453s
Jan 29 14:39:58.806: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16718142s
Jan 29 14:40:00.822: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184001032s
Jan 29 14:40:02.849: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210454272s
Jan 29 14:40:04.867: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228710906s
STEP: Saw pod success
Jan 29 14:40:04.867: INFO: Pod "client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba" satisfied condition "success or failure"
Jan 29 14:40:04.880: INFO: Trying to get logs from node iruya-node pod client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba container test-container: 
STEP: delete the pod
Jan 29 14:40:05.122: INFO: Waiting for pod client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba to disappear
Jan 29 14:40:05.138: INFO: Pod client-containers-0c8d52ce-6873-462c-8742-deb3288f18ba no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:40:05.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9478" for this suite.
Jan 29 14:40:11.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:40:11.264: INFO: namespace containers-9478 deletion completed in 6.120800104s

• [SLOW TEST:16.740 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
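
In pod-spec terms, overriding "the image's default command and arguments" means setting both command, which replaces the image ENTRYPOINT, and args, which replaces the image CMD. A sketch with assumed values:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    command: ["/bin/echo"]          # replaces the image ENTRYPOINT
    args: ["override", "arguments"] # replaces the image CMD
EOF
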
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:40:11.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 29 14:40:11.399: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3007,SelfLink:/api/v1/namespaces/watch-3007/configmaps/e2e-watch-test-watch-closed,UID:b8781b47-c6a8-4933-9142-1d6849dd2ad9,ResourceVersion:22325890,Generation:0,CreationTimestamp:2020-01-29 14:40:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 14:40:11.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3007,SelfLink:/api/v1/namespaces/watch-3007/configmaps/e2e-watch-test-watch-closed,UID:b8781b47-c6a8-4933-9142-1d6849dd2ad9,ResourceVersion:22325891,Generation:0,CreationTimestamp:2020-01-29 14:40:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 29 14:40:11.413: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3007,SelfLink:/api/v1/namespaces/watch-3007/configmaps/e2e-watch-test-watch-closed,UID:b8781b47-c6a8-4933-9142-1d6849dd2ad9,ResourceVersion:22325892,Generation:0,CreationTimestamp:2020-01-29 14:40:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 14:40:11.414: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3007,SelfLink:/api/v1/namespaces/watch-3007/configmaps/e2e-watch-test-watch-closed,UID:b8781b47-c6a8-4933-9142-1d6849dd2ad9,ResourceVersion:22325893,Generation:0,CreationTimestamp:2020-01-29 14:40:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:40:11.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3007" for this suite.
Jan 29 14:40:17.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:40:17.580: INFO: namespace watch-3007 deletion completed in 6.161393576s

• [SLOW TEST:6.316 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
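
The mechanism under test is the watch API's resourceVersion parameter: a watch started from the last version a previous watch delivered replays every event after it. With this run's numbers, the equivalent raw request is below; the kubectl proxy on 127.0.0.1:8001 is assumed setup:

kubectl proxy --port=8001 &
# Resume from 22325891, the last resourceVersion the first watch saw; the server
# replays the MODIFIED (22325892) and DELETED (22325893) events logged above.
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/watch-3007/configmaps?watch=true&resourceVersion=22325891'
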
SSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:40:17.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d3ffbc11-897c-4cd0-8679-399927185621
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d3ffbc11-897c-4cd0-8679-399927185621
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:40:30.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2753" for this suite.
Jan 29 14:40:52.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:40:52.254: INFO: namespace projected-2753 deletion completed in 22.234731754s

• [SLOW TEST:34.674 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
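
The update is visible because the kubelet resyncs configMap volume contents periodically, so a write to the ConfigMap object eventually shows up in the mounted file without restarting the pod. A hand-run sketch of the same loop, all names assumed:

kubectl create configmap example-cm --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-watcher
spec:
  containers:
  - name: watcher
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: example-cm
EOF
kubectl patch configmap example-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs projected-cm-watcher -f    # value-2 appears once the kubelet resyncs
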
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:40:52.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 29 14:40:52.334: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 29 14:40:57.342: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:40:58.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1328" for this suite.
Jan 29 14:41:06.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:41:06.594: INFO: namespace replication-controller-1328 deletion completed in 8.185003562s

• [SLOW TEST:14.340 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
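
"Released" here means orphaned: once a pod's labels stop matching the ReplicationController's selector, the controller removes its ownerReference and starts a replacement to restore the replica count. With a placeholder for the generated pod name, which this log does not show:

# <pod-release-xxxxx> stands in for the RC's generated pod name.
kubectl patch pod <pod-release-xxxxx> --namespace=replication-controller-1328 \
  -p '{"metadata":{"labels":{"name":"not-pod-release"}}}'
kubectl get pods --namespace=replication-controller-1328   # old pod orphaned, fresh replica created
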
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:41:06.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:41:06.776: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0" in namespace "downward-api-9921" to be "success or failure"
Jan 29 14:41:06.793: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.497188ms
Jan 29 14:41:08.806: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02858054s
Jan 29 14:41:10.814: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037311274s
Jan 29 14:41:12.823: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045857909s
Jan 29 14:41:14.829: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052083071s
Jan 29 14:41:16.838: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060642198s
STEP: Saw pod success
Jan 29 14:41:16.838: INFO: Pod "downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0" satisfied condition "success or failure"
Jan 29 14:41:16.842: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0 container client-container: 
STEP: delete the pod
Jan 29 14:41:16.980: INFO: Waiting for pod downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0 to disappear
Jan 29 14:41:16.997: INFO: Pod downwardapi-volume-9f1e07bd-8b96-422e-8c6d-219c3ef93bf0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:41:16.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9921" for this suite.
Jan 29 14:41:23.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:41:23.154: INFO: namespace downward-api-9921 deletion completed in 6.119673053s

• [SLOW TEST:16.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
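
The podname variant uses fieldRef where the memory-request spec earlier in this log used resourceFieldRef; only the items entry changes. Sketch, with the same assumptions as before:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name  # the file contains the pod's own name
EOF
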
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:41:23.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8425
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 14:41:23.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 14:41:55.486: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8425 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:41:55.486: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:41:55.559516       8 log.go:172] (0xc002e05600) (0xc00233a640) Create stream
I0129 14:41:55.559589       8 log.go:172] (0xc002e05600) (0xc00233a640) Stream added, broadcasting: 1
I0129 14:41:55.572778       8 log.go:172] (0xc002e05600) Reply frame received for 1
I0129 14:41:55.572902       8 log.go:172] (0xc002e05600) (0xc001ca9220) Create stream
I0129 14:41:55.572921       8 log.go:172] (0xc002e05600) (0xc001ca9220) Stream added, broadcasting: 3
I0129 14:41:55.575339       8 log.go:172] (0xc002e05600) Reply frame received for 3
I0129 14:41:55.575400       8 log.go:172] (0xc002e05600) (0xc00049ff40) Create stream
I0129 14:41:55.575423       8 log.go:172] (0xc002e05600) (0xc00049ff40) Stream added, broadcasting: 5
I0129 14:41:55.577473       8 log.go:172] (0xc002e05600) Reply frame received for 5
I0129 14:41:56.730971       8 log.go:172] (0xc002e05600) Data frame received for 3
I0129 14:41:56.731057       8 log.go:172] (0xc001ca9220) (3) Data frame handling
I0129 14:41:56.731083       8 log.go:172] (0xc001ca9220) (3) Data frame sent
I0129 14:41:56.915245       8 log.go:172] (0xc002e05600) Data frame received for 1
I0129 14:41:56.915666       8 log.go:172] (0xc002e05600) (0xc001ca9220) Stream removed, broadcasting: 3
I0129 14:41:56.915747       8 log.go:172] (0xc00233a640) (1) Data frame handling
I0129 14:41:56.915829       8 log.go:172] (0xc00233a640) (1) Data frame sent
I0129 14:41:56.915995       8 log.go:172] (0xc002e05600) (0xc00049ff40) Stream removed, broadcasting: 5
I0129 14:41:56.916721       8 log.go:172] (0xc002e05600) (0xc00233a640) Stream removed, broadcasting: 1
I0129 14:41:56.916801       8 log.go:172] (0xc002e05600) Go away received
I0129 14:41:56.917379       8 log.go:172] (0xc002e05600) (0xc00233a640) Stream removed, broadcasting: 1
I0129 14:41:56.917442       8 log.go:172] (0xc002e05600) (0xc001ca9220) Stream removed, broadcasting: 3
I0129 14:41:56.917485       8 log.go:172] (0xc002e05600) (0xc00049ff40) Stream removed, broadcasting: 5
Jan 29 14:41:56.917: INFO: Found all expected endpoints: [netserver-0]
Jan 29 14:41:56.929: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8425 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:41:56.930: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:41:57.022151       8 log.go:172] (0xc0030140b0) (0xc00233aa00) Create stream
I0129 14:41:57.022244       8 log.go:172] (0xc0030140b0) (0xc00233aa00) Stream added, broadcasting: 1
I0129 14:41:57.033340       8 log.go:172] (0xc0030140b0) Reply frame received for 1
I0129 14:41:57.033432       8 log.go:172] (0xc0030140b0) (0xc001f40140) Create stream
I0129 14:41:57.033446       8 log.go:172] (0xc0030140b0) (0xc001f40140) Stream added, broadcasting: 3
I0129 14:41:57.036157       8 log.go:172] (0xc0030140b0) Reply frame received for 3
I0129 14:41:57.036198       8 log.go:172] (0xc0030140b0) (0xc001ca9400) Create stream
I0129 14:41:57.036208       8 log.go:172] (0xc0030140b0) (0xc001ca9400) Stream added, broadcasting: 5
I0129 14:41:57.043091       8 log.go:172] (0xc0030140b0) Reply frame received for 5
I0129 14:41:58.174397       8 log.go:172] (0xc0030140b0) Data frame received for 3
I0129 14:41:58.174483       8 log.go:172] (0xc001f40140) (3) Data frame handling
I0129 14:41:58.174510       8 log.go:172] (0xc001f40140) (3) Data frame sent
I0129 14:41:58.305688       8 log.go:172] (0xc0030140b0) (0xc001f40140) Stream removed, broadcasting: 3
I0129 14:41:58.305951       8 log.go:172] (0xc0030140b0) Data frame received for 1
I0129 14:41:58.305971       8 log.go:172] (0xc00233aa00) (1) Data frame handling
I0129 14:41:58.306049       8 log.go:172] (0xc00233aa00) (1) Data frame sent
I0129 14:41:58.306133       8 log.go:172] (0xc0030140b0) (0xc00233aa00) Stream removed, broadcasting: 1
I0129 14:41:58.306243       8 log.go:172] (0xc0030140b0) (0xc001ca9400) Stream removed, broadcasting: 5
I0129 14:41:58.306539       8 log.go:172] (0xc0030140b0) (0xc00233aa00) Stream removed, broadcasting: 1
I0129 14:41:58.306719       8 log.go:172] (0xc0030140b0) (0xc001f40140) Stream removed, broadcasting: 3
I0129 14:41:58.306733       8 log.go:172] (0xc0030140b0) (0xc001ca9400) Stream removed, broadcasting: 5
I0129 14:41:58.306752       8 log.go:172] (0xc0030140b0) Go away received
Jan 29 14:41:58.306: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:41:58.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8425" for this suite.
Jan 29 14:42:22.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:42:22.429: INFO: namespace pod-network-test-8425 deletion completed in 24.110431694s

• [SLOW TEST:59.275 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
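
Unlike the intra-pod check, the node-pod variant probes the netserver pods directly with netcat from the host-network test pod, as the ExecWithOptions lines show. By hand, with this run's pod IPs:

kubectl exec host-test-container-pod -c hostexec --namespace=pod-network-test-8425 -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"
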
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:42:22.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-26a10dca-0634-4ba1-8c8b-c9c1d16126ce
STEP: Creating configMap with name cm-test-opt-upd-0112e5b5-11ed-4900-a1b3-71566e04fe64
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-26a10dca-0634-4ba1-8c8b-c9c1d16126ce
STEP: Updating configmap cm-test-opt-upd-0112e5b5-11ed-4900-a1b3-71566e04fe64
STEP: Creating configMap with name cm-test-opt-create-505d0fe1-c17c-40d2-a0d9-02ad6cea36f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:42:37.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2288" for this suite.
Jan 29 14:42:59.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:42:59.509: INFO: namespace projected-2288 deletion completed in 22.172933354s

• [SLOW TEST:37.079 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
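
"Optional" maps to optional: true on the volume source: the pod starts, and keeps running, while a referenced ConfigMap is deleted or does not exist yet, which is exactly the delete/update/create sequence above. A sketch with shortened stand-ins for this run's generated names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-example
spec:
  containers:
  - name: watcher
    image: busybox                  # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del     # deleted mid-test; optional lets the pod keep running
          optional: true
      - configMap:
          name: cm-test-opt-create  # created only after the pod starts
          optional: true
EOF
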
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:42:59.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-48f6
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 14:42:59.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-48f6" in namespace "subpath-2151" to be "success or failure"
Jan 29 14:42:59.795: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Pending", Reason="", readiness=false. Elapsed: 54.796462ms
Jan 29 14:43:01.804: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06368684s
Jan 29 14:43:03.813: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072540301s
Jan 29 14:43:05.829: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088727739s
Jan 29 14:43:07.840: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100255184s
Jan 29 14:43:09.860: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1194673s
Jan 29 14:43:11.873: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 12.133085856s
Jan 29 14:43:13.888: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 14.147716763s
Jan 29 14:43:15.899: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 16.15849617s
Jan 29 14:43:17.908: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 18.168335251s
Jan 29 14:43:19.918: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 20.177715571s
Jan 29 14:43:21.933: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 22.192898215s
Jan 29 14:43:23.947: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 24.206654379s
Jan 29 14:43:25.957: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 26.216923623s
Jan 29 14:43:27.964: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 28.223418268s
Jan 29 14:43:30.398: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Running", Reason="", readiness=true. Elapsed: 30.657572288s
Jan 29 14:43:32.408: INFO: Pod "pod-subpath-test-configmap-48f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.668122977s
STEP: Saw pod success
Jan 29 14:43:32.408: INFO: Pod "pod-subpath-test-configmap-48f6" satisfied condition "success or failure"
Jan 29 14:43:32.416: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-48f6 container test-container-subpath-configmap-48f6: 
STEP: delete the pod
Jan 29 14:43:32.479: INFO: Waiting for pod pod-subpath-test-configmap-48f6 to disappear
Jan 29 14:43:32.484: INFO: Pod pod-subpath-test-configmap-48f6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-48f6
Jan 29 14:43:32.485: INFO: Deleting pod "pod-subpath-test-configmap-48f6" in namespace "subpath-2151"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:43:32.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2151" for this suite.
Jan 29 14:43:38.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:43:38.593: INFO: namespace subpath-2151 deletion completed in 6.099536719s

• [SLOW TEST:39.084 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
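
A subPath mount projects a single key of the volume onto one path in the container instead of shadowing a whole directory; "mountPath of existing file" means the target path already exists in the image and the mount covers it. A sketch under assumed names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/hostname"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/hostname      # a file that already exists in the image
      subPath: data-1               # mount only this key over it
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap            # assumed; must contain the key data-1
EOF
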
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:43:38.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-8e5410b0-9533-4a98-a0a3-958925b47c09
STEP: Creating a pod to test consume secrets
Jan 29 14:43:38.860: INFO: Waiting up to 5m0s for pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99" in namespace "secrets-8680" to be "success or failure"
Jan 29 14:43:38.906: INFO: Pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99": Phase="Pending", Reason="", readiness=false. Elapsed: 46.045272ms
Jan 29 14:43:40.913: INFO: Pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053211602s
Jan 29 14:43:42.924: INFO: Pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063731977s
Jan 29 14:43:44.931: INFO: Pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07063637s
Jan 29 14:43:46.939: INFO: Pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07848871s
STEP: Saw pod success
Jan 29 14:43:46.939: INFO: Pod "pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99" satisfied condition "success or failure"
Jan 29 14:43:46.943: INFO: Trying to get logs from node iruya-node pod pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99 container secret-volume-test: 
STEP: delete the pod
Jan 29 14:43:47.075: INFO: Waiting for pod pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99 to disappear
Jan 29 14:43:47.126: INFO: Pod pod-secrets-6b1c9ae6-0f4a-4aa8-8de2-5f5540705a99 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:43:47.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8680" for this suite.
Jan 29 14:43:53.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:43:53.302: INFO: namespace secrets-8680 deletion completed in 6.168047321s

• [SLOW TEST:14.709 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
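
"With mappings" refers to the items list on the secret volume source, which remaps a secret key onto a chosen path instead of the default one-file-per-key layout. Sketch under assumed names:

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1       # key exposed at this remapped path
EOF
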
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:43:53.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 14:43:53.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-264'
Jan 29 14:43:55.121: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 14:43:55.121: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 29 14:43:57.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-264'
Jan 29 14:43:57.438: INFO: stderr: ""
Jan 29 14:43:57.439: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:43:57.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-264" for this suite.
Jan 29 14:44:03.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:44:03.650: INFO: namespace kubectl-264 deletion completed in 6.203794636s

• [SLOW TEST:10.347 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
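
The stderr line above is the 1.15-era deprecation of generator-based kubectl run. The replacement the warning points at for producing a Deployment is kubectl create:

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-264
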
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:44:03.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 29 14:44:03.753: INFO: Waiting up to 5m0s for pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8" in namespace "var-expansion-6440" to be "success or failure"
Jan 29 14:44:03.871: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8": Phase="Pending", Reason="", readiness=false. Elapsed: 117.372729ms
Jan 29 14:44:05.883: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129415128s
Jan 29 14:44:07.891: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136967479s
Jan 29 14:44:09.898: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144086281s
Jan 29 14:44:11.904: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8": Phase="Running", Reason="", readiness=true. Elapsed: 8.150448529s
Jan 29 14:44:13.916: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.161957641s
STEP: Saw pod success
Jan 29 14:44:13.916: INFO: Pod "var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8" satisfied condition "success or failure"
Jan 29 14:44:13.920: INFO: Trying to get logs from node iruya-node pod var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8 container dapi-container: 
STEP: delete the pod
Jan 29 14:44:14.013: INFO: Waiting for pod var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8 to disappear
Jan 29 14:44:14.023: INFO: Pod var-expansion-99e27b07-ee4b-4566-8677-e326221e19d8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:44:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6440" for this suite.
Jan 29 14:44:20.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:44:20.181: INFO: namespace var-expansion-6440 deletion completed in 6.15105129s

• [SLOW TEST:16.530 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
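
The Variable Expansion spec above checks that $(VAR) references in a container's command are expanded by the kubelet from the container's env before the process starts. A minimal sketch of such a pod (name, image, and variable are illustrative, not the suite's generated ones):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is substituted by the kubelet, not by the shell
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
EOF
# the pod should reach Succeeded and its log should show the expanded value
kubectl logs var-expansion-demo
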
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:44:20.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 14:44:20.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-149'
Jan 29 14:44:20.548: INFO: stderr: ""
Jan 29 14:44:20.548: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 29 14:44:30.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-149 -o json'
Jan 29 14:44:30.770: INFO: stderr: ""
Jan 29 14:44:30.770: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-29T14:44:20Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-149\",\n        \"resourceVersion\": \"22326555\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-149/pods/e2e-test-nginx-pod\",\n        \"uid\": \"0968ca8d-b06a-432c-85bf-c50a61ff6620\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-nr66z\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-nr66z\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-nr66z\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T14:44:20Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T14:44:27Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T14:44:27Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T14:44:20Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://f4248dc62f199b16a1756eb31bb133c1246904a579142f9258d7594f07963e25\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-29T14:44:26Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-29T14:44:20Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 29 14:44:30.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-149'
Jan 29 14:44:31.371: INFO: stderr: ""
Jan 29 14:44:31.371: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 29 14:44:31.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-149'
Jan 29 14:44:39.205: INFO: stderr: ""
Jan 29 14:44:39.205: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:44:39.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-149" for this suite.
Jan 29 14:44:45.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:44:45.437: INFO: namespace kubectl-149 deletion completed in 6.204490683s

• [SLOW TEST:25.255 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
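
The replace step above pipes an edited copy of the live manifest into kubectl replace -f -, swapping nginx:1.14-alpine for busybox:1.29 (a pod's container image is one of the few pod fields that may be changed in place). Reproduced by hand against the same pod and namespace:

kubectl get pod e2e-test-nginx-pod -n kubectl-149 -o yaml \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
# confirm the image actually changed
kubectl get pod e2e-test-nginx-pod -n kubectl-149 \
  -o jsonpath='{.spec.containers[0].image}'
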
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:44:45.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 29 14:44:45.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef" in namespace "projected-7074" to be "success or failure"
Jan 29 14:44:45.660: INFO: Pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef": Phase="Pending", Reason="", readiness=false. Elapsed: 43.628077ms
Jan 29 14:44:47.676: INFO: Pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059595597s
Jan 29 14:44:49.683: INFO: Pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067017739s
Jan 29 14:44:51.692: INFO: Pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07588817s
Jan 29 14:44:53.711: INFO: Pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094174211s
STEP: Saw pod success
Jan 29 14:44:53.711: INFO: Pod "downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef" satisfied condition "success or failure"
Jan 29 14:44:53.717: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef container client-container: 
STEP: delete the pod
Jan 29 14:44:53.885: INFO: Waiting for pod downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef to disappear
Jan 29 14:44:53.897: INFO: Pod downwardapi-volume-78c47966-400a-4280-b3f9-ec9c69496aef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:44:53.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7074" for this suite.
Jan 29 14:44:59.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:45:00.033: INFO: namespace projected-7074 deletion completed in 6.129758253s

• [SLOW TEST:14.596 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
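
The Projected downwardAPI spec mounts only the pod's own name into the container through a projected volume. A minimal sketch (names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # prints its own pod name, delivered as a file by the kubelet
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
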
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:45:00.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 29 14:45:00.145: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:45:15.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2573" for this suite.
Jan 29 14:45:21.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:45:21.481: INFO: namespace pods-2573 deletion completed in 6.16206186s

• [SLOW TEST:21.448 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
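
The Pods spec above registers a watch before submitting the pod, then asserts that both the creation and the graceful deletion show up as watch events. The same lifecycle can be followed from two shells (pod name and image are illustrative; the generator flag matches the one this suite uses elsewhere):

# shell 1: -w streams changes as the pod is added, runs, and disappears
kubectl get pods -n pods-2573 -w
# shell 2: submit a pod, then delete it with a grace period
kubectl run pause-demo --generator=run-pod/v1 --image=k8s.gcr.io/pause:3.1 -n pods-2573
kubectl delete pod pause-demo -n pods-2573 --grace-period=30
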
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:45:21.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:45:29.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1696" for this suite.
Jan 29 14:46:21.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:46:21.905: INFO: namespace kubelet-test-1696 deletion completed in 52.207261165s

• [SLOW TEST:60.424 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
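
The hostAliases spec verifies that entries from pod.spec.hostAliases are merged into the container's /etc/hosts by the kubelet. A minimal sketch (all names and addresses illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    # the extra entries appear alongside the kubelet-managed ones
    command: ["/bin/sh", "-c", "cat /etc/hosts"]
EOF
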
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:46:21.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-622e8dae-2ea0-411e-9440-6f54d969a8a3
STEP: Creating a pod to test consume secrets
Jan 29 14:46:22.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd" in namespace "projected-7277" to be "success or failure"
Jan 29 14:46:22.064: INFO: Pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329432ms
Jan 29 14:46:24.074: INFO: Pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01591968s
Jan 29 14:46:26.145: INFO: Pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086918864s
Jan 29 14:46:28.152: INFO: Pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094037199s
Jan 29 14:46:30.160: INFO: Pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101986192s
STEP: Saw pod success
Jan 29 14:46:30.160: INFO: Pod "pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd" satisfied condition "success or failure"
Jan 29 14:46:30.164: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 14:46:30.262: INFO: Waiting for pod pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd to disappear
Jan 29 14:46:30.270: INFO: Pod pod-projected-secrets-6e63dd49-8c88-4b27-9eab-8c17688d17fd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:46:30.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7277" for this suite.
Jan 29 14:46:36.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:46:36.441: INFO: namespace projected-7277 deletion completed in 6.161703266s

• [SLOW TEST:14.535 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
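
Here the secret key is remapped to a different file name (the "mappings") and given an explicit per-item mode inside a projected volume. A sketch with illustrative names and a 0400 item mode:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # the key shows up under its mapped path with the per-item mode
    command: ["/bin/sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400   # octal; overrides the volume default for this item
EOF
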
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:46:36.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-9708146e-c444-452c-98b1-ecb4c141c4e0
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:46:46.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8660" for this suite.
Jan 29 14:47:08.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:47:08.960: INFO: namespace configmap-8660 deletion completed in 22.160425851s

• [SLOW TEST:32.519 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
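
The ConfigMap binary-data spec relies on the split between data (UTF-8 strings) and binaryData (base64-encoded raw bytes) in the same object; both kinds of key are materialized as files when the ConfigMap is mounted. A sketch with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo
data:
  text: "plain text value"
binaryData:
  blob: AQIDBA==   # bytes 0x01 0x02 0x03 0x04, base64-encoded
EOF
# round-trip the binary key to confirm the bytes survive unmodified
kubectl get configmap binary-demo -o jsonpath='{.binaryData.blob}' | base64 -d | od -An -tx1
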
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:47:08.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-16a667aa-f856-4020-bcd6-045a2829579f in namespace container-probe-3857
Jan 29 14:47:19.140: INFO: Started pod busybox-16a667aa-f856-4020-bcd6-045a2829579f in namespace container-probe-3857
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 14:47:19.144: INFO: Initial restart count of pod busybox-16a667aa-f856-4020-bcd6-045a2829579f is 0
Jan 29 14:48:15.469: INFO: Restart count of pod container-probe-3857/busybox-16a667aa-f856-4020-bcd6-045a2829579f is now 1 (56.324446354s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:48:15.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3857" for this suite.
Jan 29 14:48:21.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:48:21.830: INFO: namespace container-probe-3857 deletion completed in 6.313356913s

• [SLOW TEST:72.870 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
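
The restart observed above (restartCount going from 0 to 1 after ~56s) is the liveness machinery at work: the exec probe runs "cat /tmp/health" and the kubelet restarts the container once the file disappears. A minimal sketch of such a pod (timings and names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    # healthy for 30s, then the probe file is removed and the probe starts failing
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# watch restartCount climb once the probe fails
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
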
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:48:21.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 29 14:48:30.610: INFO: Successfully updated pod "labelsupdateb20b0d75-5c17-462d-ac55-7cb7ebbb8ecd"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:48:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2706" for this suite.
Jan 29 14:48:54.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:48:54.833: INFO: namespace downward-api-2706 deletion completed in 22.139901502s

• [SLOW TEST:33.003 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
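
Unlike downward-API environment variables, downward-API volume files are refreshed when pod metadata changes, which is what the "update labels on modification" spec exploits. A sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# mutate the label; the mounted file catches up shortly afterwards
kubectl label pod labelsupdate-demo stage=after --overwrite
kubectl logs -f labelsupdate-demo
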
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:48:54.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-e33d4cf2-0750-4f2f-b236-07d860eb3f5c
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:48:54.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3139" for this suite.
Jan 29 14:49:00.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:49:01.038: INFO: namespace configmap-3139 deletion completed in 6.151222377s

• [SLOW TEST:6.204 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
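
This spec is a pure API-validation check: the server must refuse a ConfigMap whose data map contains an empty key, so there is nothing to clean up but the namespace. The rejection is easy to provoke by hand (name illustrative):

# expected to fail validation with an Invalid-value error on the "" key
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: emptykey-demo
data:
  "": "value"
EOF
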
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:49:01.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 29 14:49:01.118: INFO: namespace kubectl-8444
Jan 29 14:49:01.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8444'
Jan 29 14:49:01.593: INFO: stderr: ""
Jan 29 14:49:01.593: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 29 14:49:02.607: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:02.607: INFO: Found 0 / 1
Jan 29 14:49:03.612: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:03.612: INFO: Found 0 / 1
Jan 29 14:49:04.614: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:04.614: INFO: Found 0 / 1
Jan 29 14:49:05.601: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:05.601: INFO: Found 0 / 1
Jan 29 14:49:06.617: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:06.617: INFO: Found 0 / 1
Jan 29 14:49:07.603: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:07.603: INFO: Found 0 / 1
Jan 29 14:49:08.614: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:08.614: INFO: Found 1 / 1
Jan 29 14:49:08.614: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 29 14:49:08.619: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:49:08.619: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 29 14:49:08.619: INFO: wait on redis-master startup in kubectl-8444 
Jan 29 14:49:08.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6vnp5 redis-master --namespace=kubectl-8444'
Jan 29 14:49:08.890: INFO: stderr: ""
Jan 29 14:49:08.890: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Jan 14:49:07.665 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jan 14:49:07.665 # Server started, Redis version 3.2.12\n1:M 29 Jan 14:49:07.665 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jan 14:49:07.665 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 29 14:49:08.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8444'
Jan 29 14:49:09.138: INFO: stderr: ""
Jan 29 14:49:09.139: INFO: stdout: "service/rm2 exposed\n"
Jan 29 14:49:09.169: INFO: Service rm2 in namespace kubectl-8444 found.
STEP: exposing service
Jan 29 14:49:11.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8444'
Jan 29 14:49:11.472: INFO: stderr: ""
Jan 29 14:49:11.472: INFO: stdout: "service/rm3 exposed\n"
Jan 29 14:49:11.478: INFO: Service rm3 in namespace kubectl-8444 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:49:13.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8444" for this suite.
Jan 29 14:49:35.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:49:35.662: INFO: namespace kubectl-8444 deletion completed in 22.163203035s

• [SLOW TEST:34.623 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
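
Both expose steps above derive a new Service from an existing object's selector: first from the RC, then from the first Service, which is why rm2 and rm3 end up pointing at the same redis-master pod. The commands as the suite ran them, plus a check that the endpoints coincide:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n kubectl-8444
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n kubectl-8444
# same selector, so both services list the same pod IP on port 6379
kubectl get endpoints rm2 rm3 -n kubectl-8444
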
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:49:35.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-4074
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4074 to expose endpoints map[]
Jan 29 14:49:35.916: INFO: Get endpoints failed (19.09939ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 29 14:49:36.925: INFO: successfully validated that service endpoint-test2 in namespace services-4074 exposes endpoints map[] (1.028238968s elapsed)
STEP: Creating pod pod1 in namespace services-4074
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4074 to expose endpoints map[pod1:[80]]
Jan 29 14:49:41.020: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.08212593s elapsed, will retry)
Jan 29 14:49:44.053: INFO: successfully validated that service endpoint-test2 in namespace services-4074 exposes endpoints map[pod1:[80]] (7.115076019s elapsed)
STEP: Creating pod pod2 in namespace services-4074
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4074 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 29 14:49:49.207: INFO: Unexpected endpoints: found map[c04e9deb-937b-437c-8236-9f67eedfafff:[80]], expected map[pod1:[80] pod2:[80]] (5.148768383s elapsed, will retry)
Jan 29 14:49:51.242: INFO: successfully validated that service endpoint-test2 in namespace services-4074 exposes endpoints map[pod1:[80] pod2:[80]] (7.183727058s elapsed)
STEP: Deleting pod pod1 in namespace services-4074
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4074 to expose endpoints map[pod2:[80]]
Jan 29 14:49:52.373: INFO: successfully validated that service endpoint-test2 in namespace services-4074 exposes endpoints map[pod2:[80]] (1.117111177s elapsed)
STEP: Deleting pod pod2 in namespace services-4074
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4074 to expose endpoints map[]
Jan 29 14:49:52.394: INFO: successfully validated that service endpoint-test2 in namespace services-4074 exposes endpoints map[] (8.42794ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:49:52.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4074" for this suite.
Jan 29 14:50:14.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:50:14.707: INFO: namespace services-4074 deletion completed in 22.175414559s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.045 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
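
The Services spec drives the endpoints controller through its whole cycle: an empty Endpoints object while no pods match the selector, one address per ready matching pod, and removal as pods are deleted. A hand-run sketch (names illustrative; the image is the one this suite uses elsewhere):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-demo
spec:
  selector:
    app: endpoint-demo
  ports:
  - port: 80
    targetPort: 80
EOF
# no matching pods yet, so ENDPOINTS shows <none>
kubectl get endpoints endpoint-demo
kubectl run pod1 --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine \
  --labels=app=endpoint-demo
# once pod1 is Ready its IP is listed; deleting it empties the list again
kubectl get endpoints endpoint-demo
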
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:50:14.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 29 14:50:22.987: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:50:23.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3465" for this suite.
Jan 29 14:50:29.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:50:29.264: INFO: namespace container-runtime-3465 deletion completed in 6.199829467s

• [SLOW TEST:14.558 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
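
With the default TerminationMessagePolicy (File), whatever the container writes to terminationMessagePath is copied into its terminated state, and the spec above additionally runs the writer as a non-root user against a non-default path. A sketch (uid, path, and names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    securityContext:
      runAsUser: 1000   # non-root writer, as in the spec above
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
EOF
# the kubelet surfaces the file's content after the container exits
kubectl get pod termination-path-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
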
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:50:29.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2266
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 14:50:29.390: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 14:51:07.824: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-2266 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:51:07.825: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:51:07.928046       8 log.go:172] (0xc000b53600) (0xc000db5c20) Create stream
I0129 14:51:07.928172       8 log.go:172] (0xc000b53600) (0xc000db5c20) Stream added, broadcasting: 1
I0129 14:51:07.937035       8 log.go:172] (0xc000b53600) Reply frame received for 1
I0129 14:51:07.937149       8 log.go:172] (0xc000b53600) (0xc001e28640) Create stream
I0129 14:51:07.937164       8 log.go:172] (0xc000b53600) (0xc001e28640) Stream added, broadcasting: 3
I0129 14:51:07.939334       8 log.go:172] (0xc000b53600) Reply frame received for 3
I0129 14:51:07.939365       8 log.go:172] (0xc000b53600) (0xc000db5d60) Create stream
I0129 14:51:07.939371       8 log.go:172] (0xc000b53600) (0xc000db5d60) Stream added, broadcasting: 5
I0129 14:51:07.940608       8 log.go:172] (0xc000b53600) Reply frame received for 5
I0129 14:51:08.117704       8 log.go:172] (0xc000b53600) Data frame received for 3
I0129 14:51:08.117794       8 log.go:172] (0xc001e28640) (3) Data frame handling
I0129 14:51:08.117807       8 log.go:172] (0xc001e28640) (3) Data frame sent
I0129 14:51:08.247726       8 log.go:172] (0xc000b53600) (0xc001e28640) Stream removed, broadcasting: 3
I0129 14:51:08.247916       8 log.go:172] (0xc000b53600) Data frame received for 1
I0129 14:51:08.247949       8 log.go:172] (0xc000b53600) (0xc000db5d60) Stream removed, broadcasting: 5
I0129 14:51:08.247998       8 log.go:172] (0xc000db5c20) (1) Data frame handling
I0129 14:51:08.248013       8 log.go:172] (0xc000db5c20) (1) Data frame sent
I0129 14:51:08.248056       8 log.go:172] (0xc000b53600) (0xc000db5c20) Stream removed, broadcasting: 1
I0129 14:51:08.248297       8 log.go:172] (0xc000b53600) Go away received
I0129 14:51:08.248669       8 log.go:172] (0xc000b53600) (0xc000db5c20) Stream removed, broadcasting: 1
I0129 14:51:08.248694       8 log.go:172] (0xc000b53600) (0xc001e28640) Stream removed, broadcasting: 3
I0129 14:51:08.248704       8 log.go:172] (0xc000b53600) (0xc000db5d60) Stream removed, broadcasting: 5
Jan 29 14:51:08.248: INFO: Waiting for endpoints: map[]
Jan 29 14:51:08.257: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-2266 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 14:51:08.257: INFO: >>> kubeConfig: /root/.kube/config
I0129 14:51:08.305695       8 log.go:172] (0xc0012ae580) (0xc000ed90e0) Create stream
I0129 14:51:08.305738       8 log.go:172] (0xc0012ae580) (0xc000ed90e0) Stream added, broadcasting: 1
I0129 14:51:08.314264       8 log.go:172] (0xc0012ae580) Reply frame received for 1
I0129 14:51:08.314327       8 log.go:172] (0xc0012ae580) (0xc000216f00) Create stream
I0129 14:51:08.314352       8 log.go:172] (0xc0012ae580) (0xc000216f00) Stream added, broadcasting: 3
I0129 14:51:08.317481       8 log.go:172] (0xc0012ae580) Reply frame received for 3
I0129 14:51:08.317604       8 log.go:172] (0xc0012ae580) (0xc001d4ebe0) Create stream
I0129 14:51:08.317616       8 log.go:172] (0xc0012ae580) (0xc001d4ebe0) Stream added, broadcasting: 5
I0129 14:51:08.318955       8 log.go:172] (0xc0012ae580) Reply frame received for 5
I0129 14:51:08.414176       8 log.go:172] (0xc0012ae580) Data frame received for 3
I0129 14:51:08.414270       8 log.go:172] (0xc000216f00) (3) Data frame handling
I0129 14:51:08.414294       8 log.go:172] (0xc000216f00) (3) Data frame sent
I0129 14:51:08.643387       8 log.go:172] (0xc0012ae580) Data frame received for 1
I0129 14:51:08.643544       8 log.go:172] (0xc0012ae580) (0xc000216f00) Stream removed, broadcasting: 3
I0129 14:51:08.643684       8 log.go:172] (0xc000ed90e0) (1) Data frame handling
I0129 14:51:08.643738       8 log.go:172] (0xc000ed90e0) (1) Data frame sent
I0129 14:51:08.643748       8 log.go:172] (0xc0012ae580) (0xc000ed90e0) Stream removed, broadcasting: 1
I0129 14:51:08.643760       8 log.go:172] (0xc0012ae580) (0xc001d4ebe0) Stream removed, broadcasting: 5
I0129 14:51:08.644057       8 log.go:172] (0xc0012ae580) Go away received
I0129 14:51:08.644238       8 log.go:172] (0xc0012ae580) (0xc000ed90e0) Stream removed, broadcasting: 1
I0129 14:51:08.644297       8 log.go:172] (0xc0012ae580) (0xc000216f00) Stream removed, broadcasting: 3
I0129 14:51:08.644312       8 log.go:172] (0xc0012ae580) (0xc001d4ebe0) Stream removed, broadcasting: 5
Jan 29 14:51:08.644: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:51:08.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2266" for this suite.
Jan 29 14:51:30.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:51:30.799: INFO: namespace pod-network-test-2266 deletion completed in 22.146018806s

• [SLOW TEST:61.535 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
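
Behind the streamed frames above, the framework simply execs curl inside the host-network helper pod and asks one webserver pod's /dial handler to reach its peer; "Waiting for endpoints: map[]" means the set of still-unreached hostnames is empty, i.e. every peer answered. The equivalent manual probe, reusing the pod names and addresses from this run:

kubectl exec -n pod-network-test-2266 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
# a JSON body naming the peer's hostname indicates pod-to-pod HTTP works
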
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:51:30.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 29 14:51:30.887: INFO: Waiting up to 5m0s for pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3" in namespace "emptydir-1079" to be "success or failure"
Jan 29 14:51:30.895: INFO: Pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.230945ms
Jan 29 14:51:32.902: INFO: Pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014482415s
Jan 29 14:51:34.911: INFO: Pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023382372s
Jan 29 14:51:36.923: INFO: Pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035909933s
Jan 29 14:51:38.933: INFO: Pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045022941s
STEP: Saw pod success
Jan 29 14:51:38.933: INFO: Pod "pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3" satisfied condition "success or failure"
Jan 29 14:51:38.936: INFO: Trying to get logs from node iruya-node pod pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3 container test-container: 
STEP: delete the pod
Jan 29 14:51:39.030: INFO: Waiting for pod pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3 to disappear
Jan 29 14:51:39.042: INFO: Pod pod-5b16f1ef-1ff8-4688-ab0e-1370124be7f3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:51:39.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1079" for this suite.
Jan 29 14:51:45.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:51:45.239: INFO: namespace emptydir-1079 deletion completed in 6.189607318s

• [SLOW TEST:14.439 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
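
The emptyDir spec writes a file with the stated ownership and mode inside the scratch volume and checks what the container observes; (root,0644,default) means root-owned, mode 0644, on the default (node-disk) medium. A sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -ln /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium is node disk; medium: Memory would use tmpfs
EOF
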
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:51:45.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:51:45.320: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:51:46.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2598" for this suite.
Jan 29 14:51:52.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:51:52.648: INFO: namespace custom-resource-definition-2598 deletion completed in 6.1927893s

• [SLOW TEST:7.409 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
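
Creating and deleting a CustomResourceDefinition is all this spec does, via the client rather than kubectl. Against the v1.15 server in this run the CRD API is still apiextensions.k8s.io/v1beta1; a hand-run sketch with an illustrative group and kind:

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: demos.mygroup.example.com   # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
EOF
kubectl get crd demos.mygroup.example.com
kubectl delete crd demos.mygroup.example.com
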
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:51:52.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e30b66bc-3824-4b80-9f4f-11ed7338fc6a
STEP: Creating a pod to test consume configMaps
Jan 29 14:51:52.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0" in namespace "configmap-8655" to be "success or failure"
Jan 29 14:51:52.790: INFO: Pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.105104ms
Jan 29 14:51:54.796: INFO: Pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023829681s
Jan 29 14:51:56.805: INFO: Pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033100251s
Jan 29 14:51:58.811: INFO: Pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039334383s
Jan 29 14:52:00.818: INFO: Pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04570768s
STEP: Saw pod success
Jan 29 14:52:00.818: INFO: Pod "pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0" satisfied condition "success or failure"
Jan 29 14:52:00.823: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0 container configmap-volume-test: 
STEP: delete the pod
Jan 29 14:52:00.865: INFO: Waiting for pod pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0 to disappear
Jan 29 14:52:00.876: INFO: Pod pod-configmaps-c19bfb4a-7a9a-4101-9627-c767b8e3d4b0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:52:00.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8655" for this suite.
Jan 29 14:52:07.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:52:07.142: INFO: namespace configmap-8655 deletion completed in 6.241031919s

• [SLOW TEST:14.493 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
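
Plain (non-binary) ConfigMap consumption this time: each key under data becomes a file beneath the mount path. A sketch with illustrative names:

kubectl create configmap configmap-volume-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # reads back the file materialized from key data-1
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo
EOF
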
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:52:07.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-56f41add-8886-4017-a53c-98db55ba97f9
STEP: Creating a pod to test consume secrets
Jan 29 14:52:07.295: INFO: Waiting up to 5m0s for pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed" in namespace "secrets-4874" to be "success or failure"
Jan 29 14:52:07.304: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.225398ms
Jan 29 14:52:09.331: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035189717s
Jan 29 14:52:11.342: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046163206s
Jan 29 14:52:13.350: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054453967s
Jan 29 14:52:15.362: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065791324s
Jan 29 14:52:17.371: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075201221s
STEP: Saw pod success
Jan 29 14:52:17.371: INFO: Pod "pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed" satisfied condition "success or failure"
Jan 29 14:52:17.375: INFO: Trying to get logs from node iruya-node pod pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed container secret-volume-test: 
STEP: delete the pod
Jan 29 14:52:17.554: INFO: Waiting for pod pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed to disappear
Jan 29 14:52:17.573: INFO: Pod pod-secrets-770eb0ef-e0bd-4ed7-b67e-57e740d5d7ed no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:52:17.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4874" for this suite.
Jan 29 14:52:23.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:52:23.719: INFO: namespace secrets-4874 deletion completed in 6.133950937s

• [SLOW TEST:16.577 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
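The defaultMode variant above can be approximated with a sketch like the following; the secret name, pod name, and busybox image are stand-ins, not values from the test.

kubectl create secret generic demo-secret --from-literal=data-1=value-1

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/secret/data-1"]   # expect 400
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
  volumes:
  - name: secret
    secret:
      secretName: demo-secret
      defaultMode: 0400   # owner read-only on every projected file
EOF

kubectl logs secret-mode-demo
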
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:52:23.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 29 14:52:32.198: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:52:32.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-851" for this suite.
Jan 29 14:52:38.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:52:38.436: INFO: namespace container-runtime-851 deletion completed in 6.169502802s

• [SLOW TEST:14.717 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
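A hedged sketch of the terminationMessagePath / terminationMessagePolicy mechanics this spec checks; everything here (names, image, the explicit path) is illustrative. With FallbackToLogsOnError, the file still wins whenever it is non-empty, as in the succeeded-pod case above.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write the message to the termination-message file and exit 0.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# After termination, the message surfaces in the container status:
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
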
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:52:38.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1969.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1969.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1969.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1969.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1969.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1969.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 14:52:50.768: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692: the server could not find the requested resource (get pods dns-test-0f965ede-3c7d-40a2-9904-91337dce7692)
Jan 29 14:52:50.772: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692: the server could not find the requested resource (get pods dns-test-0f965ede-3c7d-40a2-9904-91337dce7692)
Jan 29 14:52:50.777: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-1969.svc.cluster.local from pod dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692: the server could not find the requested resource (get pods dns-test-0f965ede-3c7d-40a2-9904-91337dce7692)
Jan 29 14:52:50.801: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692: the server could not find the requested resource (get pods dns-test-0f965ede-3c7d-40a2-9904-91337dce7692)
Jan 29 14:52:50.806: INFO: Unable to read jessie_udp@PodARecord from pod dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692: the server could not find the requested resource (get pods dns-test-0f965ede-3c7d-40a2-9904-91337dce7692)
Jan 29 14:52:50.810: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692: the server could not find the requested resource (get pods dns-test-0f965ede-3c7d-40a2-9904-91337dce7692)
Jan 29 14:52:50.810: INFO: Lookups using dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-1969.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 29 14:52:55.895: INFO: DNS probes using dns-1969/dns-test-0f965ede-3c7d-40a2-9904-91337dce7692 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:52:55.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1969" for this suite.
Jan 29 14:53:02.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:53:02.150: INFO: namespace dns-1969 deletion completed in 6.180587954s

• [SLOW TEST:23.713 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
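The probe scripts above boil down to two checks: every pod's /etc/hosts is kubelet-managed and resolves the pod's own hostname, and pod A records take the dashed-IP form <a-b-c-d>.<namespace>.pod.cluster.local. A small illustrative check, assuming busybox and a live cluster:

kubectl run dns-demo --image=busybox --restart=Never \
  --command -- sh -c 'cat /etc/hosts; getent hosts $(hostname)'

# Once the pod completes, the log shows the kubelet-managed hosts file
# plus the resolved entry for the pod's own hostname.
kubectl logs dns-demo
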
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:53:02.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 29 14:53:02.301: INFO: Waiting up to 5m0s for pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2" in namespace "downward-api-3751" to be "success or failure"
Jan 29 14:53:02.326: INFO: Pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.470934ms
Jan 29 14:53:04.335: INFO: Pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033468767s
Jan 29 14:53:06.344: INFO: Pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042631948s
Jan 29 14:53:08.354: INFO: Pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053015174s
Jan 29 14:53:10.368: INFO: Pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066871165s
STEP: Saw pod success
Jan 29 14:53:10.368: INFO: Pod "downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2" satisfied condition "success or failure"
Jan 29 14:53:10.372: INFO: Trying to get logs from node iruya-node pod downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2 container dapi-container: 
STEP: delete the pod
Jan 29 14:53:10.450: INFO: Waiting for pod downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2 to disappear
Jan 29 14:53:10.457: INFO: Pod downward-api-7af3c221-298d-4ed9-8d6c-24d312d017d2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:53:10.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3751" for this suite.
Jan 29 14:53:16.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:53:16.633: INFO: namespace downward-api-3751 deletion completed in 6.166425906s

• [SLOW TEST:14.483 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
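A minimal sketch of the downward-API env-var plumbing this spec exercises (illustrative names and image):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod UID field the test reads back
EOF

kubectl logs downward-env-demo
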
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:53:16.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 29 14:53:16.788: INFO: Number of nodes with available pods: 0
Jan 29 14:53:16.788: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:18.261: INFO: Number of nodes with available pods: 0
Jan 29 14:53:18.262: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:18.817: INFO: Number of nodes with available pods: 0
Jan 29 14:53:18.817: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:20.212: INFO: Number of nodes with available pods: 0
Jan 29 14:53:20.212: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:20.800: INFO: Number of nodes with available pods: 0
Jan 29 14:53:20.800: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:21.802: INFO: Number of nodes with available pods: 0
Jan 29 14:53:21.802: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:23.536: INFO: Number of nodes with available pods: 0
Jan 29 14:53:23.536: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:24.034: INFO: Number of nodes with available pods: 0
Jan 29 14:53:24.034: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:25.079: INFO: Number of nodes with available pods: 0
Jan 29 14:53:25.079: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:25.808: INFO: Number of nodes with available pods: 0
Jan 29 14:53:25.808: INFO: Node iruya-node is running more than one daemon pod
Jan 29 14:53:26.803: INFO: Number of nodes with available pods: 2
Jan 29 14:53:26.803: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 29 14:53:26.912: INFO: Number of nodes with available pods: 1
Jan 29 14:53:26.912: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:28.244: INFO: Number of nodes with available pods: 1
Jan 29 14:53:28.244: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:29.194: INFO: Number of nodes with available pods: 1
Jan 29 14:53:29.194: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:29.932: INFO: Number of nodes with available pods: 1
Jan 29 14:53:29.932: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:30.944: INFO: Number of nodes with available pods: 1
Jan 29 14:53:30.944: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:31.977: INFO: Number of nodes with available pods: 1
Jan 29 14:53:31.977: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:32.931: INFO: Number of nodes with available pods: 1
Jan 29 14:53:32.931: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:33.940: INFO: Number of nodes with available pods: 1
Jan 29 14:53:33.940: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 29 14:53:34.930: INFO: Number of nodes with available pods: 2
Jan 29 14:53:34.930: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1013, will wait for the garbage collector to delete the pods
Jan 29 14:53:35.002: INFO: Deleting DaemonSet.extensions daemon-set took: 11.147286ms
Jan 29 14:53:35.303: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.958542ms
Jan 29 14:53:42.110: INFO: Number of nodes with available pods: 0
Jan 29 14:53:42.110: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 14:53:42.113: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1013/daemonsets","resourceVersion":"22327917"},"items":null}

Jan 29 14:53:42.115: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1013/pods","resourceVersion":"22327917"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:53:42.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1013" for this suite.
Jan 29 14:53:48.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:53:48.480: INFO: namespace daemonsets-1013 deletion completed in 6.347317496s

• [SLOW TEST:31.847 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
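For reference, a stripped-down DaemonSet of the kind the spec creates; the controller schedules one pod per eligible node and, as tested above, recreates any pod whose phase is forced to Failed. Names and the pause image are illustrative.

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds
spec:
  selector:
    matchLabels:
      app: demo-ds
  template:
    metadata:
      labels:
        app: demo-ds
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

# One pod per schedulable node; failed daemon pods are retried automatically.
kubectl get pods -l app=demo-ds -o wide
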
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:53:48.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4eade063-51d6-41b7-9444-5f57686a3b21
STEP: Creating a pod to test consume secrets
Jan 29 14:53:48.659: INFO: Waiting up to 5m0s for pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8" in namespace "secrets-3914" to be "success or failure"
Jan 29 14:53:48.665: INFO: Pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04154ms
Jan 29 14:53:50.674: INFO: Pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015145036s
Jan 29 14:53:52.682: INFO: Pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022816422s
Jan 29 14:53:54.700: INFO: Pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041594066s
Jan 29 14:53:56.714: INFO: Pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055597089s
STEP: Saw pod success
Jan 29 14:53:56.714: INFO: Pod "pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8" satisfied condition "success or failure"
Jan 29 14:53:56.720: INFO: Trying to get logs from node iruya-node pod pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8 container secret-volume-test: 
STEP: delete the pod
Jan 29 14:53:56.793: INFO: Waiting for pod pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8 to disappear
Jan 29 14:53:56.800: INFO: Pod pod-secrets-f9c4e96d-2468-4ea4-8c61-1721f1d5ded8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:53:56.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3914" for this suite.
Jan 29 14:54:02.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:54:03.033: INFO: namespace secrets-3914 deletion completed in 6.224548579s

• [SLOW TEST:14.552 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:54:03.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:54:03.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1003" for this suite.
Jan 29 14:54:09.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:54:09.392: INFO: namespace kubelet-test-1003 deletion completed in 6.185663357s

• [SLOW TEST:6.359 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:54:09.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:54:09.453: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 29 14:54:12.071: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:54:12.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7468" for this suite.
Jan 29 14:54:22.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:54:22.372: INFO: namespace replication-controller-7468 deletion completed in 10.254410059s

• [SLOW TEST:12.980 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
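A sketch of the quota-versus-RC interaction checked above; all names are stand-ins. The RC does not error out when the quota blocks pod creation: it records a ReplicaFailure condition, and scaling down within the quota clears it.

kubectl create quota condition-demo --hard=pods=2

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-demo
spec:
  replicas: 3                    # one more than the quota allows
  selector:
    app: condition-demo
  template:
    metadata:
      labels:
        app: condition-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

# The failure surfaces as a status condition rather than an error:
kubectl get rc condition-demo -o jsonpath='{.status.conditions[*].type}'
kubectl scale rc condition-demo --replicas=2   # satisfies the quota; condition clears
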
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:54:22.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 29 14:54:30.547: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 29 14:54:50.710: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:54:50.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4510" for this suite.
Jan 29 14:54:56.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:54:56.901: INFO: namespace pods-4510 deletion completed in 6.178698057s

• [SLOW TEST:34.529 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
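The graceful-deletion flow above can be tried by hand with something like the following (illustrative names and image):

kubectl run grace-demo --image=k8s.gcr.io/pause:3.1 --restart=Never

# Deleting with an explicit grace period stamps deletionTimestamp and
# deletionGracePeriodSeconds on the object; the kubelet then has that long
# to stop the container before the pod disappears from the API.
kubectl delete pod grace-demo --grace-period=30
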
SS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:54:56.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 29 14:54:56.994: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2040" to be "success or failure"
Jan 29 14:54:57.002: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.370458ms
Jan 29 14:54:59.040: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045694643s
Jan 29 14:55:01.058: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063890891s
Jan 29 14:55:03.063: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068414456s
Jan 29 14:55:05.073: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079019573s
Jan 29 14:55:07.080: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085305907s
STEP: Saw pod success
Jan 29 14:55:07.080: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 29 14:55:07.082: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 29 14:55:07.143: INFO: Waiting for pod pod-host-path-test to disappear
Jan 29 14:55:07.154: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:55:07.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2040" for this suite.
Jan 29 14:55:13.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:55:13.327: INFO: namespace hostpath-2040 deletion completed in 6.168065641s

• [SLOW TEST:16.426 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
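An illustrative hostPath pod in the spirit of pod-host-path-test; the path, type, and expected mode here are assumptions for demonstration, not the test's exact fixture.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: hp
      mountPath: /test-volume
  volumes:
  - name: hp
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate   # created 0755 on the node if absent
EOF

kubectl logs hostpath-demo   # mode of the bind-mounted directory
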
SS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:55:13.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:55:13.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:55:21.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4403" for this suite.
Jan 29 14:56:07.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:56:07.962: INFO: namespace pods-4403 deletion completed in 46.171452494s

• [SLOW TEST:54.635 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
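The spec drives the pod exec subresource over a websocket; kubectl exec hits the same subresource (via SPDY by default), so a rough by-hand equivalent looks like this, with illustrative names:

kubectl run ws-demo --image=busybox --restart=Never --command -- sleep 600
kubectl wait --for=condition=Ready pod/ws-demo
kubectl exec ws-demo -- echo remote command execution works
kubectl delete pod ws-demo --grace-period=0 --force
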
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:56:07.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 29 14:56:16.655: INFO: Successfully updated pod "annotationupdatef2ac7256-8fc0-4364-ac50-ae7c28204482"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:56:18.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-982" for this suite.
Jan 29 14:56:40.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:56:40.930: INFO: namespace downward-api-982 deletion completed in 22.177562003s

• [SLOW TEST:32.968 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
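A sketch of the downwardAPI-volume refresh being tested: the kubelet rewrites the projected file some seconds after the pod object changes. Names, image, and the annotation key are illustrative.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Change the annotation; the projected file is refreshed on the next kubelet sync:
kubectl annotate pod annotation-demo build="2" --overwrite
kubectl logs annotation-demo --tail=5
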
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:56:40.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 29 14:56:41.013: INFO: Waiting up to 5m0s for pod "pod-08eee0b4-8385-4f00-9319-54479127add5" in namespace "emptydir-9059" to be "success or failure"
Jan 29 14:56:41.114: INFO: Pod "pod-08eee0b4-8385-4f00-9319-54479127add5": Phase="Pending", Reason="", readiness=false. Elapsed: 100.658181ms
Jan 29 14:56:43.128: INFO: Pod "pod-08eee0b4-8385-4f00-9319-54479127add5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114691539s
Jan 29 14:56:45.139: INFO: Pod "pod-08eee0b4-8385-4f00-9319-54479127add5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125397646s
Jan 29 14:56:47.145: INFO: Pod "pod-08eee0b4-8385-4f00-9319-54479127add5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131815438s
Jan 29 14:56:49.152: INFO: Pod "pod-08eee0b4-8385-4f00-9319-54479127add5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139323576s
STEP: Saw pod success
Jan 29 14:56:49.153: INFO: Pod "pod-08eee0b4-8385-4f00-9319-54479127add5" satisfied condition "success or failure"
Jan 29 14:56:49.155: INFO: Trying to get logs from node iruya-node pod pod-08eee0b4-8385-4f00-9319-54479127add5 container test-container: 
STEP: delete the pod
Jan 29 14:56:49.247: INFO: Waiting for pod pod-08eee0b4-8385-4f00-9319-54479127add5 to disappear
Jan 29 14:56:49.256: INFO: Pod pod-08eee0b4-8385-4f00-9319-54479127add5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:56:49.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9059" for this suite.
Jan 29 14:56:55.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:56:55.443: INFO: namespace emptydir-9059 deletion completed in 6.181588318s

• [SLOW TEST:14.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
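The (root,0777,default) case can be approximated as below; an emptyDir on the default medium is mounted world-writable (0777). Names and image are illustrative.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}    # default (node-disk) medium; medium: Memory would use tmpfs
EOF

kubectl logs emptydir-demo
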
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:56:55.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-613047c1-ba26-454b-b871-5401ea53456d
STEP: Creating a pod to test consume configMaps
Jan 29 14:56:55.578: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350" in namespace "configmap-7684" to be "success or failure"
Jan 29 14:56:55.593: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350": Phase="Pending", Reason="", readiness=false. Elapsed: 15.245434ms
Jan 29 14:56:57.614: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035760434s
Jan 29 14:56:59.620: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041875717s
Jan 29 14:57:01.627: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04843137s
Jan 29 14:57:03.640: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062176799s
Jan 29 14:57:05.682: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104299967s
STEP: Saw pod success
Jan 29 14:57:05.683: INFO: Pod "pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350" satisfied condition "success or failure"
Jan 29 14:57:05.688: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350 container configmap-volume-test: 
STEP: delete the pod
Jan 29 14:57:05.759: INFO: Waiting for pod pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350 to disappear
Jan 29 14:57:05.768: INFO: Pod pod-configmaps-0bfe42b0-b2b2-438a-9afd-ee3363ec3350 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:57:05.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7684" for this suite.
Jan 29 14:57:11.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:57:12.076: INFO: namespace configmap-7684 deletion completed in 6.224668187s

• [SLOW TEST:16.632 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
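"Mappings" here means the items list that remaps a ConfigMap key to a custom path inside the volume; a minimal illustrative sketch (names and image are stand-ins):

kubectl create configmap map-demo --from-literal=data-2=value-2

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-items-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: map-demo
      items:                 # remap key -> custom path within the volume
      - key: data-2
        path: path/to/data-2
EOF

kubectl logs cm-items-demo   # value-2
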
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:57:12.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 29 14:57:12.167: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 29 14:57:13.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:57:15.102: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:57:17.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:57:19.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:57:21.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:57:23.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715906632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 14:57:28.934: INFO: Waited 3.826562123s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:57:29.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4382" for this suite.
Jan 29 14:57:35.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:57:35.918: INFO: namespace aggregator-4382 deletion completed in 6.253757729s

• [SLOW TEST:23.842 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
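What "registering the sample API server" amounts to is an APIService object pointing at an in-cluster Service; the sketch below shows the shape only. The group/version match the wardle sample apiserver, but the Service name, namespace, and the insecureSkipTLSVerify shortcut are assumptions (a real registration ships a caBundle and a running backend Deployment).

cat <<'EOF' | kubectl create -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true   # sketch only; production registrations set caBundle
  service:                      # assumed Service fronting the extension apiserver
    name: sample-api
    namespace: default
EOF

# The aggregated group appears in discovery once the backend reports Available:
kubectl get apiservice v1alpha1.wardle.k8s.io
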
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:57:35.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3148a617-cafc-4806-8363-e156af905a12
STEP: Creating a pod to test consume configMaps
Jan 29 14:57:36.084: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a" in namespace "projected-5706" to be "success or failure"
Jan 29 14:57:36.132: INFO: Pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.675616ms
Jan 29 14:57:38.142: INFO: Pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057979765s
Jan 29 14:57:40.150: INFO: Pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065974303s
Jan 29 14:57:42.162: INFO: Pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078734566s
Jan 29 14:57:44.188: INFO: Pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104335566s
STEP: Saw pod success
Jan 29 14:57:44.188: INFO: Pod "pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a" satisfied condition "success or failure"
Jan 29 14:57:44.195: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 14:57:44.505: INFO: Waiting for pod pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a to disappear
Jan 29 14:57:44.517: INFO: Pod pod-projected-configmaps-9d03a600-ff84-4795-92a9-7bc611a5758a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:57:44.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5706" for this suite.
Jan 29 14:57:50.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:57:50.706: INFO: namespace projected-5706 deletion completed in 6.184760334s

• [SLOW TEST:14.787 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
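Projected volumes generalize the plain ConfigMap case: several sources can share one mount, and each item may carry a per-item mode, which is what this spec asserts on. An illustrative, self-contained sketch:

kubectl create configmap proj-demo --from-literal=data-2=value-2

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/projected/cm/data-2"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: proj-demo
          items:
          - key: data-2
            path: cm/data-2
            mode: 0400        # per-item mode override
EOF

kubectl logs projected-demo   # expect 400
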
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:57:50.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:58:40.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-548" for this suite.
Jan 29 14:58:46.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:58:46.852: INFO: namespace container-runtime-548 deletion completed in 6.167143363s

• [SLOW TEST:56.145 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
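The terminate-cmd-rpa / rpof / rpn containers appear to encode restartPolicy Always / OnFailure / Never; the observable differences are the pod phase and restartCount the spec asserts on. A hand-runnable sketch of the OnFailure case, with illustrative names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure    # also valid: Always, Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]   # non-zero exit triggers a restart
EOF

# restartCount climbs (with backoff) until the container exits 0:
kubectl get pod restart-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
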
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:58:46.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-6a5029da-174f-4072-a76f-7adfd9836d93
STEP: Creating secret with name s-test-opt-upd-d9e349c4-2b50-4bbb-b8d7-3419f97b735f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6a5029da-174f-4072-a76f-7adfd9836d93
STEP: Updating secret s-test-opt-upd-d9e349c4-2b50-4bbb-b8d7-3419f97b735f
STEP: Creating secret with name s-test-opt-create-c708a9a9-c02c-4cb9-aeee-a4f9697fee79
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:59:01.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-306" for this suite.
Jan 29 14:59:23.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:59:23.651: INFO: namespace projected-306 deletion completed in 22.199062553s

• [SLOW TEST:36.799 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
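Annotation: behind these steps is a projected volume whose secret sources are marked optional, which is what lets the pod keep running while one referenced secret is deleted and another is created later. A minimal sketch, assuming illustrative secret/pod names and keys:

# Illustrative names; the suite generates UUID-suffixed secrets.
kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
kubectl create secret generic s-test-opt-upd --from-literal=data-2=value-2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volumes
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true     # pod stays Running even after this secret is deleted
      - secret:
          name: s-test-opt-upd
          optional: true
EOF
# Deleting one source and updating the other is eventually reflected in the
# files under the mount path, which is what the "waiting to observe update
# in volume" step polls for:
kubectl delete secret s-test-opt-del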
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:59:23.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 29 14:59:23.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7746'
Jan 29 14:59:26.169: INFO: stderr: ""
Jan 29 14:59:26.169: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 29 14:59:26.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7746'
Jan 29 14:59:27.462: INFO: stderr: ""
Jan 29 14:59:27.462: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 29 14:59:28.474: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:28.474: INFO: Found 0 / 1
Jan 29 14:59:29.478: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:29.478: INFO: Found 0 / 1
Jan 29 14:59:30.542: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:30.542: INFO: Found 0 / 1
Jan 29 14:59:31.473: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:31.473: INFO: Found 0 / 1
Jan 29 14:59:32.477: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:32.477: INFO: Found 0 / 1
Jan 29 14:59:33.470: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:33.470: INFO: Found 1 / 1
Jan 29 14:59:33.470: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 29 14:59:33.473: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 14:59:33.473: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 29 14:59:33.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-zm4tj --namespace=kubectl-7746'
Jan 29 14:59:33.615: INFO: stderr: ""
Jan 29 14:59:33.615: INFO: stdout: "Name:           redis-master-zm4tj\nNamespace:      kubectl-7746\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Wed, 29 Jan 2020 14:59:26 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://a0c1a079d78330d1f806981b4203690115aa55a45708c99eacb093446bd28b02\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 29 Jan 2020 14:59:32 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fr6ss (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-fr6ss:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-fr6ss\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  7s    default-scheduler    Successfully assigned kubectl-7746/redis-master-zm4tj to iruya-node\n  Normal  Pulled     3s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Jan 29 14:59:33.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7746'
Jan 29 14:59:33.838: INFO: stderr: ""
Jan 29 14:59:33.838: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-7746\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: redis-master-zm4tj\n"
Jan 29 14:59:33.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7746'
Jan 29 14:59:33.996: INFO: stderr: ""
Jan 29 14:59:33.997: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-7746\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.248.164\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 29 14:59:34.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 29 14:59:34.255: INFO: stderr: ""
Jan 29 14:59:34.255: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 29 Jan 2020 14:59:06 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 29 Jan 2020 14:59:06 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 29 Jan 2020 14:59:06 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 29 Jan 2020 14:59:06 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         178d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         109d\n  kubectl-7746               redis-master-zm4tj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 29 14:59:34.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7746'
Jan 29 14:59:34.372: INFO: stderr: ""
Jan 29 14:59:34.372: INFO: stdout: "Name:         kubectl-7746\nLabels:       e2e-framework=kubectl\n              e2e-run=6cfa43eb-0d70-4f3f-b409-bbfece2c1da4\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 14:59:34.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7746" for this suite.
Jan 29 14:59:56.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 14:59:56.538: INFO: namespace kubectl-7746 deletion completed in 22.158244767s

• [SLOW TEST:32.887 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
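Annotation: condensed, the spec runs one kubectl describe per resource kind and checks the output for the expected fields; with the names from this run, the equivalent commands are:

kubectl describe pod redis-master-zm4tj --namespace=kubectl-7746
kubectl describe rc redis-master --namespace=kubectl-7746
kubectl describe service redis-master --namespace=kubectl-7746
kubectl describe node iruya-node
kubectl describe namespace kubectl-7746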
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 14:59:56.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1906
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 29 14:59:56.679: INFO: Found 0 stateful pods, waiting for 3
Jan 29 15:00:06.691: INFO: Found 2 stateful pods, waiting for 3
Jan 29 15:00:16.690: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:00:16.690: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:00:16.690: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 29 15:00:26.687: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:00:26.687: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:00:26.687: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 29 15:00:26.723: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 29 15:00:36.798: INFO: Updating stateful set ss2
Jan 29 15:00:36.814: INFO: Waiting for Pod statefulset-1906/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 29 15:00:47.113: INFO: Found 2 stateful pods, waiting for 3
Jan 29 15:00:57.131: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:00:57.131: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:00:57.131: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 29 15:01:07.127: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:01:07.127: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 15:01:07.127: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 29 15:01:07.166: INFO: Updating stateful set ss2
Jan 29 15:01:07.220: INFO: Waiting for Pod statefulset-1906/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 15:01:17.235: INFO: Waiting for Pod statefulset-1906/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 15:01:27.272: INFO: Updating stateful set ss2
Jan 29 15:01:27.327: INFO: Waiting for StatefulSet statefulset-1906/ss2 to complete update
Jan 29 15:01:27.327: INFO: Waiting for Pod statefulset-1906/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 29 15:01:37.340: INFO: Waiting for StatefulSet statefulset-1906/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 29 15:01:47.348: INFO: Deleting all statefulset in ns statefulset-1906
Jan 29 15:01:47.352: INFO: Scaling statefulset ss2 to 0
Jan 29 15:02:07.436: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 15:02:07.440: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:02:07.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1906" for this suite.
Jan 29 15:02:13.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:02:13.652: INFO: namespace statefulset-1906 deletion completed in 6.144625138s

• [SLOW TEST:137.114 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
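Annotation: the canary and phased behaviour is driven by the RollingUpdate partition: pods with an ordinal >= partition move to the new revision, pods below it keep the old one. A hand-run sketch with this run's values (the container name "nginx" is an assumption, not taken from the log):

# Canary: with 3 replicas, partition=2 lets only ss2-2 pick up the new image.
kubectl -n statefulset-1906 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl -n statefulset-1906 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Phased roll-out: lower the partition step by step; 0 updates every pod.
kubectl -n statefulset-1906 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl -n statefulset-1906 rollout status statefulset/ss2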
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:02:13.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 29 15:02:21.915: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c2abc37c-9760-4c1e-baec-b03016e64234,GenerateName:,Namespace:events-1213,SelfLink:/api/v1/namespaces/events-1213/pods/send-events-c2abc37c-9760-4c1e-baec-b03016e64234,UID:d70ee0b6-fb74-4798-a2d0-1c2e4f15d2b5,ResourceVersion:22329399,Generation:0,CreationTimestamp:2020-01-29 15:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 812960636,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j9v5g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j9v5g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-j9v5g true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00123f750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00123f770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 15:02:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 15:02:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 15:02:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 15:02:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-29 15:02:13 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-29 15:02:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://6a634d978521a3c519d77c8433df9f4250cd112da144f32009ec1dd3fc5988d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 29 15:02:23.932: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 29 15:02:25.941: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:02:25.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1213" for this suite.
Jan 29 15:03:06.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:03:07.025: INFO: namespace events-1213 deletion completed in 40.198333266s

• [SLOW TEST:53.373 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
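Annotation: the two polls above look for a scheduler event and a kubelet event tied to the pod; the same query can be made directly with a field selector on the involved object (pod name taken from this run):

kubectl -n events-1213 get events \
  --field-selector involvedObject.name=send-events-c2abc37c-9760-4c1e-baec-b03016e64234

Expect a Scheduled event from default-scheduler plus Pulled/Created/Started events from the kubelet.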
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:03:07.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 15:03:07.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1384'
Jan 29 15:03:07.269: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 15:03:07.269: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 29 15:03:07.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1384'
Jan 29 15:03:07.554: INFO: stderr: ""
Jan 29 15:03:07.554: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:03:07.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1384" for this suite.
Jan 29 15:03:13.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:03:13.763: INFO: namespace kubectl-1384 deletion completed in 6.204224526s

• [SLOW TEST:6.738 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
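Annotation: as the stderr warning says, the job/v1 generator is deprecated. kubectl create job is the replacement, but it emits restartPolicy: Never, so reproducing this OnFailure job needs an explicit manifest, roughly:

kubectl -n kubectl-1384 create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
EOF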
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:03:13.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 29 15:03:13.846: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:03:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4815" for this suite.
Jan 29 15:03:34.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:03:34.306: INFO: namespace init-container-4815 deletion completed in 6.151189957s

• [SLOW TEST:20.542 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
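Annotation: what the spec verifies is that on a restartPolicy Never pod, init containers run one at a time, each must complete before the next starts, and the app container starts only after all of them succeed. A sketch (names, images and commands are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "sleep 10"]
EOF
# Both init containers should end up terminated with reason Completed:
kubectl get pod pod-init-demo \
  -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'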
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:03:34.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 29 15:03:43.044: INFO: Successfully updated pod "pod-update-1b39eb7e-ea9a-4ac9-8be2-3f5e5603b47d"
STEP: verifying the updated pod is in kubernetes
Jan 29 15:03:43.063: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:03:43.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5316" for this suite.
Jan 29 15:04:05.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:04:05.193: INFO: namespace pods-5316 deletion completed in 22.122120042s

• [SLOW TEST:30.887 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
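Annotation: most of a pod's spec is immutable once created; "updating the pod" here means mutating metadata such as labels, e.g. via a merge patch (pod name and namespace from this run; the label key/value are illustrative):

kubectl -n pods-5316 patch pod pod-update-1b39eb7e-ea9a-4ac9-8be2-3f5e5603b47d \
  --type=merge -p '{"metadata":{"labels":{"time":"modified"}}}'
kubectl -n pods-5316 get pod pod-update-1b39eb7e-ea9a-4ac9-8be2-3f5e5603b47d --show-labels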
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:04:05.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9188
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 15:04:05.291: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 15:04:43.513: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9188 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 15:04:43.513: INFO: >>> kubeConfig: /root/.kube/config
I0129 15:04:43.611756       8 log.go:172] (0xc000497c30) (0xc003372820) Create stream
I0129 15:04:43.611844       8 log.go:172] (0xc000497c30) (0xc003372820) Stream added, broadcasting: 1
I0129 15:04:43.619857       8 log.go:172] (0xc000497c30) Reply frame received for 1
I0129 15:04:43.619936       8 log.go:172] (0xc000497c30) (0xc0002179a0) Create stream
I0129 15:04:43.619953       8 log.go:172] (0xc000497c30) (0xc0002179a0) Stream added, broadcasting: 3
I0129 15:04:43.622177       8 log.go:172] (0xc000497c30) Reply frame received for 3
I0129 15:04:43.622220       8 log.go:172] (0xc000497c30) (0xc0013641e0) Create stream
I0129 15:04:43.622233       8 log.go:172] (0xc000497c30) (0xc0013641e0) Stream added, broadcasting: 5
I0129 15:04:43.626089       8 log.go:172] (0xc000497c30) Reply frame received for 5
I0129 15:04:43.878169       8 log.go:172] (0xc000497c30) Data frame received for 3
I0129 15:04:43.878326       8 log.go:172] (0xc0002179a0) (3) Data frame handling
I0129 15:04:43.878408       8 log.go:172] (0xc0002179a0) (3) Data frame sent
I0129 15:04:44.166949       8 log.go:172] (0xc000497c30) (0xc0013641e0) Stream removed, broadcasting: 5
I0129 15:04:44.167200       8 log.go:172] (0xc000497c30) Data frame received for 1
I0129 15:04:44.167236       8 log.go:172] (0xc000497c30) (0xc0002179a0) Stream removed, broadcasting: 3
I0129 15:04:44.167289       8 log.go:172] (0xc003372820) (1) Data frame handling
I0129 15:04:44.167325       8 log.go:172] (0xc003372820) (1) Data frame sent
I0129 15:04:44.167336       8 log.go:172] (0xc000497c30) (0xc003372820) Stream removed, broadcasting: 1
I0129 15:04:44.167355       8 log.go:172] (0xc000497c30) Go away received
I0129 15:04:44.167614       8 log.go:172] (0xc000497c30) (0xc003372820) Stream removed, broadcasting: 1
I0129 15:04:44.167632       8 log.go:172] (0xc000497c30) (0xc0002179a0) Stream removed, broadcasting: 3
I0129 15:04:44.167644       8 log.go:172] (0xc000497c30) (0xc0013641e0) Stream removed, broadcasting: 5
Jan 29 15:04:44.167: INFO: Found all expected endpoints: [netserver-0]
Jan 29 15:04:44.449: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9188 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 15:04:44.449: INFO: >>> kubeConfig: /root/.kube/config
I0129 15:04:44.517737       8 log.go:172] (0xc0025a8580) (0xc000db55e0) Create stream
I0129 15:04:44.517868       8 log.go:172] (0xc0025a8580) (0xc000db55e0) Stream added, broadcasting: 1
I0129 15:04:44.527972       8 log.go:172] (0xc0025a8580) Reply frame received for 1
I0129 15:04:44.528020       8 log.go:172] (0xc0025a8580) (0xc001364460) Create stream
I0129 15:04:44.528033       8 log.go:172] (0xc0025a8580) (0xc001364460) Stream added, broadcasting: 3
I0129 15:04:44.529350       8 log.go:172] (0xc0025a8580) Reply frame received for 3
I0129 15:04:44.529373       8 log.go:172] (0xc0025a8580) (0xc0013645a0) Create stream
I0129 15:04:44.529380       8 log.go:172] (0xc0025a8580) (0xc0013645a0) Stream added, broadcasting: 5
I0129 15:04:44.530883       8 log.go:172] (0xc0025a8580) Reply frame received for 5
I0129 15:04:44.693003       8 log.go:172] (0xc0025a8580) Data frame received for 3
I0129 15:04:44.693096       8 log.go:172] (0xc001364460) (3) Data frame handling
I0129 15:04:44.693123       8 log.go:172] (0xc001364460) (3) Data frame sent
I0129 15:04:44.816948       8 log.go:172] (0xc0025a8580) Data frame received for 1
I0129 15:04:44.817084       8 log.go:172] (0xc0025a8580) (0xc001364460) Stream removed, broadcasting: 3
I0129 15:04:44.817136       8 log.go:172] (0xc000db55e0) (1) Data frame handling
I0129 15:04:44.817152       8 log.go:172] (0xc000db55e0) (1) Data frame sent
I0129 15:04:44.817178       8 log.go:172] (0xc0025a8580) (0xc0013645a0) Stream removed, broadcasting: 5
I0129 15:04:44.817196       8 log.go:172] (0xc0025a8580) (0xc000db55e0) Stream removed, broadcasting: 1
I0129 15:04:44.817207       8 log.go:172] (0xc0025a8580) Go away received
I0129 15:04:44.817766       8 log.go:172] (0xc0025a8580) (0xc000db55e0) Stream removed, broadcasting: 1
I0129 15:04:44.817787       8 log.go:172] (0xc0025a8580) (0xc001364460) Stream removed, broadcasting: 3
I0129 15:04:44.817795       8 log.go:172] (0xc0025a8580) (0xc0013645a0) Stream removed, broadcasting: 5
Jan 29 15:04:44.817: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:04:44.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9188" for this suite.
Jan 29 15:05:08.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:05:09.041: INFO: namespace pod-network-test-9188 deletion completed in 24.209267563s

• [SLOW TEST:63.847 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
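Annotation: the ExecWithOptions lines above are the node-to-pod probe: a curl from the hostexec helper container against each netserver pod's /hostName endpoint. The same probe by hand (the target IPs 10.44.0.1 and 10.32.0.4 are specific to this run):

kubectl -n pod-network-test-9188 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName"
kubectl -n pod-network-test-9188 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName"

Each call should print the hostname of the netserver pod that owns the target IP.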
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:05:09.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:05:17.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5019" for this suite.
Jan 29 15:05:23.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:05:23.455: INFO: namespace emptydir-wrapper-5019 deletion completed in 6.141883539s

• [SLOW TEST:14.414 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
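Annotation: "should not conflict" boils down to mounting two wrapped volumes in one pod and checking both materialize cleanly. A rough sketch with a secret and a configmap side by side; the names, image and exact volume types used by the suite are not taken from this log:

kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-wrapper-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF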
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:05:23.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 29 15:05:24.773: INFO: Pod name wrapped-volume-race-481f5f30-2d32-4015-9250-e4931272dcc3: Found 0 pods out of 5
Jan 29 15:05:29.795: INFO: Pod name wrapped-volume-race-481f5f30-2d32-4015-9250-e4931272dcc3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-481f5f30-2d32-4015-9250-e4931272dcc3 in namespace emptydir-wrapper-8546, will wait for the garbage collector to delete the pods
Jan 29 15:05:57.912: INFO: Deleting ReplicationController wrapped-volume-race-481f5f30-2d32-4015-9250-e4931272dcc3 took: 21.120577ms
Jan 29 15:05:58.213: INFO: Terminating ReplicationController wrapped-volume-race-481f5f30-2d32-4015-9250-e4931272dcc3 pods took: 300.563019ms
STEP: Creating RC which spawns configmap-volume pods
Jan 29 15:06:46.966: INFO: Pod name wrapped-volume-race-141be1fc-e00e-48eb-b7b6-51d3a7743039: Found 0 pods out of 5
Jan 29 15:06:51.980: INFO: Pod name wrapped-volume-race-141be1fc-e00e-48eb-b7b6-51d3a7743039: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-141be1fc-e00e-48eb-b7b6-51d3a7743039 in namespace emptydir-wrapper-8546, will wait for the garbage collector to delete the pods
Jan 29 15:07:22.124: INFO: Deleting ReplicationController wrapped-volume-race-141be1fc-e00e-48eb-b7b6-51d3a7743039 took: 24.55734ms
Jan 29 15:07:22.425: INFO: Terminating ReplicationController wrapped-volume-race-141be1fc-e00e-48eb-b7b6-51d3a7743039 pods took: 300.946425ms
STEP: Creating RC which spawns configmap-volume pods
Jan 29 15:08:06.916: INFO: Pod name wrapped-volume-race-f0690745-b344-42ac-aae9-6ed671c02054: Found 0 pods out of 5
Jan 29 15:08:11.937: INFO: Pod name wrapped-volume-race-f0690745-b344-42ac-aae9-6ed671c02054: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f0690745-b344-42ac-aae9-6ed671c02054 in namespace emptydir-wrapper-8546, will wait for the garbage collector to delete the pods
Jan 29 15:08:44.060: INFO: Deleting ReplicationController wrapped-volume-race-f0690745-b344-42ac-aae9-6ed671c02054 took: 24.558548ms
Jan 29 15:08:44.461: INFO: Terminating ReplicationController wrapped-volume-race-f0690745-b344-42ac-aae9-6ed671c02054 pods took: 400.856128ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:09:27.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8546" for this suite.
Jan 29 15:09:37.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:09:37.997: INFO: namespace emptydir-wrapper-8546 deletion completed in 10.120200674s

• [SLOW TEST:254.542 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
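Annotation: the race check stresses volume setup and teardown: 50 configmaps, a ReplicationController whose 5 pods mount them, torn down and recreated three times while the garbage collector reaps the pods. The setup side is a simple loop (names illustrative); the RC manifest with its long list of volume entries is omitted here:

for i in $(seq 0 49); do
  kubectl -n emptydir-wrapper-8546 create configmap racey-cm-$i --from-literal=data-1=value-1
done
# ...create an RC (replicas: 5) whose pod template mounts the configmaps,
# then delete it and let the garbage collector remove the pods:
kubectl -n emptydir-wrapper-8546 delete rc wrapped-volume-race   # illustrative RC name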
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 29 15:09:37.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 29 15:09:38.469: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8217,SelfLink:/api/v1/namespaces/watch-8217/configmaps/e2e-watch-test-label-changed,UID:ddb195bc-1f54-4162-9def-ce9339f3cbb1,ResourceVersion:22331023,Generation:0,CreationTimestamp:2020-01-29 15:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 15:09:38.469: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8217,SelfLink:/api/v1/namespaces/watch-8217/configmaps/e2e-watch-test-label-changed,UID:ddb195bc-1f54-4162-9def-ce9339f3cbb1,ResourceVersion:22331024,Generation:0,CreationTimestamp:2020-01-29 15:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 29 15:09:38.470: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8217,SelfLink:/api/v1/namespaces/watch-8217/configmaps/e2e-watch-test-label-changed,UID:ddb195bc-1f54-4162-9def-ce9339f3cbb1,ResourceVersion:22331025,Generation:0,CreationTimestamp:2020-01-29 15:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 29 15:09:48.548: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8217,SelfLink:/api/v1/namespaces/watch-8217/configmaps/e2e-watch-test-label-changed,UID:ddb195bc-1f54-4162-9def-ce9339f3cbb1,ResourceVersion:22331040,Generation:0,CreationTimestamp:2020-01-29 15:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 15:09:48.549: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8217,SelfLink:/api/v1/namespaces/watch-8217/configmaps/e2e-watch-test-label-changed,UID:ddb195bc-1f54-4162-9def-ce9339f3cbb1,ResourceVersion:22331041,Generation:0,CreationTimestamp:2020-01-29 15:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 29 15:09:48.549: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8217,SelfLink:/api/v1/namespaces/watch-8217/configmaps/e2e-watch-test-label-changed,UID:ddb195bc-1f54-4162-9def-ce9339f3cbb1,ResourceVersion:22331042,Generation:0,CreationTimestamp:2020-01-29 15:09:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 29 15:09:48.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8217" for this suite.
Jan 29 15:09:54.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 15:09:54.693: INFO: namespace watch-8217 deletion completed in 6.137625254s

• [SLOW TEST:16.696 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
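Annotation: the selector semantics under test: a watch filtered by label delivers DELETED when an object's labels stop matching the selector and ADDED when they match again, even though the object itself was not deleted in between (the final DELETED above is the real deletion). The same watch can be opened from the CLI, with the namespace and label from this run:

kubectl -n watch-8217 get configmaps \
  -l watch-this-configmap=label-changed-and-restored --watch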
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 29 15:09:54.694: INFO: Running AfterSuite actions on all nodes
Jan 29 15:09:54.694: INFO: Running AfterSuite actions on node 1
Jan 29 15:09:54.694: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8026.936 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS