I0128 21:08:54.643090 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0128 21:08:54.643781 8 e2e.go:109] Starting e2e run "452199e7-bf93-4c7a-b9a7-9962d737460b" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1580245733 - Will randomize all specs Will run 278 of 4814 specs Jan 28 21:08:54.713: INFO: >>> kubeConfig: /root/.kube/config Jan 28 21:08:54.717: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 28 21:08:54.739: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 28 21:08:54.771: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 28 21:08:54.771: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 28 21:08:54.771: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 28 21:08:54.782: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 28 21:08:54.782: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Jan 28 21:08:54.782: INFO: e2e test version: v1.17.0 Jan 28 21:08:54.783: INFO: kube-apiserver version: v1.17.0 Jan 28 21:08:54.783: INFO: >>> kubeConfig: /root/.kube/config Jan 28 21:08:54.787: INFO: Cluster IP family: ipv4 SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:08:54.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl Jan 28 21:08:54.904: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Jan 28 21:08:54.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5169' Jan 28 21:08:56.948: INFO: stderr: "" Jan 28 21:08:56.948: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 28 21:08:56.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:08:57.196: INFO: stderr: "" Jan 28 21:08:57.196: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-t4lch " Jan 28 21:08:57.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:08:57.317: INFO: stderr: "" Jan 28 21:08:57.317: INFO: stdout: "" Jan 28 21:08:57.317: INFO: update-demo-nautilus-58l58 is created but not running Jan 28 21:09:02.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:02.934: INFO: stderr: "" Jan 28 21:09:02.934: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-t4lch " Jan 28 21:09:02.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:03.184: INFO: stderr: "" Jan 28 21:09:03.184: INFO: stdout: "" Jan 28 21:09:03.184: INFO: update-demo-nautilus-58l58 is created but not running Jan 28 21:09:08.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:08.380: INFO: stderr: "" Jan 28 21:09:08.380: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-t4lch " Jan 28 21:09:08.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:08.502: INFO: stderr: "" Jan 28 21:09:08.502: INFO: stdout: "true" Jan 28 21:09:08.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:08.644: INFO: stderr: "" Jan 28 21:09:08.644: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 21:09:08.644: INFO: validating pod update-demo-nautilus-58l58 Jan 28 21:09:08.660: INFO: got data: { "image": "nautilus.jpg" } Jan 28 21:09:08.660: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 21:09:08.660: INFO: update-demo-nautilus-58l58 is verified up and running Jan 28 21:09:08.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t4lch -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:08.833: INFO: stderr: "" Jan 28 21:09:08.833: INFO: stdout: "true" Jan 28 21:09:08.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t4lch -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:08.982: INFO: stderr: "" Jan 28 21:09:08.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 21:09:08.982: INFO: validating pod update-demo-nautilus-t4lch Jan 28 21:09:09.085: INFO: got data: { "image": "nautilus.jpg" } Jan 28 21:09:09.085: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 21:09:09.086: INFO: update-demo-nautilus-t4lch is verified up and running STEP: scaling down the replication controller Jan 28 21:09:09.088: INFO: scanned /root for discovery docs: Jan 28 21:09:09.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5169' Jan 28 21:09:10.451: INFO: stderr: "" Jan 28 21:09:10.451: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 28 21:09:10.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:10.728: INFO: stderr: "" Jan 28 21:09:10.728: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-t4lch " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 28 21:09:15.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:15.912: INFO: stderr: "" Jan 28 21:09:15.912: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-t4lch " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 28 21:09:20.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:21.144: INFO: stderr: "" Jan 28 21:09:21.144: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-t4lch " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 28 21:09:26.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:26.306: INFO: stderr: "" Jan 28 21:09:26.306: INFO: stdout: "update-demo-nautilus-58l58 " Jan 28 21:09:26.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:26.459: INFO: stderr: "" Jan 28 21:09:26.459: INFO: stdout: "true" Jan 28 21:09:26.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:26.680: INFO: stderr: "" Jan 28 21:09:26.680: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 21:09:26.680: INFO: validating pod update-demo-nautilus-58l58 Jan 28 21:09:26.685: INFO: got data: { "image": "nautilus.jpg" } Jan 28 21:09:26.685: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 21:09:26.685: INFO: update-demo-nautilus-58l58 is verified up and running STEP: scaling up the replication controller Jan 28 21:09:26.686: INFO: scanned /root for discovery docs: Jan 28 21:09:26.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5169' Jan 28 21:09:27.960: INFO: stderr: "" Jan 28 21:09:27.960: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 28 21:09:27.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:28.153: INFO: stderr: "" Jan 28 21:09:28.153: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-7c8q7 " Jan 28 21:09:28.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:28.383: INFO: stderr: "" Jan 28 21:09:28.383: INFO: stdout: "true" Jan 28 21:09:28.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:28.582: INFO: stderr: "" Jan 28 21:09:28.582: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 21:09:28.582: INFO: validating pod update-demo-nautilus-58l58 Jan 28 21:09:28.699: INFO: got data: { "image": "nautilus.jpg" } Jan 28 21:09:28.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 21:09:28.699: INFO: update-demo-nautilus-58l58 is verified up and running Jan 28 21:09:28.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7c8q7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:28.928: INFO: stderr: "" Jan 28 21:09:28.928: INFO: stdout: "" Jan 28 21:09:28.928: INFO: update-demo-nautilus-7c8q7 is created but not running Jan 28 21:09:33.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5169' Jan 28 21:09:34.237: INFO: stderr: "" Jan 28 21:09:34.237: INFO: stdout: "update-demo-nautilus-58l58 update-demo-nautilus-7c8q7 " Jan 28 21:09:34.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:34.325: INFO: stderr: "" Jan 28 21:09:34.325: INFO: stdout: "true" Jan 28 21:09:34.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58l58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:34.424: INFO: stderr: "" Jan 28 21:09:34.425: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 21:09:34.425: INFO: validating pod update-demo-nautilus-58l58 Jan 28 21:09:34.436: INFO: got data: { "image": "nautilus.jpg" } Jan 28 21:09:34.436: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 21:09:34.436: INFO: update-demo-nautilus-58l58 is verified up and running Jan 28 21:09:34.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7c8q7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:34.572: INFO: stderr: "" Jan 28 21:09:34.572: INFO: stdout: "true" Jan 28 21:09:34.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7c8q7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5169' Jan 28 21:09:34.690: INFO: stderr: "" Jan 28 21:09:34.690: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 21:09:34.690: INFO: validating pod update-demo-nautilus-7c8q7 Jan 28 21:09:34.694: INFO: got data: { "image": "nautilus.jpg" } Jan 28 21:09:34.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 21:09:34.694: INFO: update-demo-nautilus-7c8q7 is verified up and running STEP: using delete to clean up resources Jan 28 21:09:34.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5169' Jan 28 21:09:34.856: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:09:34.856: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 28 21:09:34.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5169' Jan 28 21:09:34.969: INFO: stderr: "No resources found in kubectl-5169 namespace.\n" Jan 28 21:09:34.969: INFO: stdout: "" Jan 28 21:09:34.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5169 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 21:09:35.203: INFO: stderr: "" Jan 28 21:09:35.204: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:09:35.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5169" for this suite. • [SLOW TEST:40.436 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":1,"skipped":8,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:09:35.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-2084d56b-a118-4a43-b56d-e3fbcb8a6c0c STEP: Creating secret with name secret-projected-all-test-volume-c78fac04-a164-4bd6-b2c8-a70e42965cda STEP: Creating a pod to test Check all projections for projected volume plugin Jan 28 21:09:35.564: INFO: Waiting up to 5m0s for pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c" in namespace "projected-2290" to be "success or failure" Jan 28 21:09:35.579: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.527412ms Jan 28 21:09:37.974: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409275962s Jan 28 21:09:39.982: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.41717868s Jan 28 21:09:41.997: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432202927s Jan 28 21:09:44.002: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.437931324s Jan 28 21:09:46.013: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.448439771s Jan 28 21:09:48.018: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.453809237s STEP: Saw pod success Jan 28 21:09:48.018: INFO: Pod "projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c" satisfied condition "success or failure" Jan 28 21:09:48.021: INFO: Trying to get logs from node jerma-node pod projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c container projected-all-volume-test: STEP: delete the pod Jan 28 21:09:48.169: INFO: Waiting for pod projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c to disappear Jan 28 21:09:48.176: INFO: Pod projected-volume-f90c536d-cf40-4135-9541-8e8dfb1d2e8c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:09:48.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2290" for this suite. • [SLOW TEST:12.964 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":2,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:09:48.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 28 21:09:56.408: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 
21:09:56.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1581" for this suite. • [SLOW TEST:8.278 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":57,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:09:56.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-rjlf STEP: Creating a pod to test atomic-volume-subpath Jan 28 21:09:56.731: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rjlf" in namespace "subpath-1998" to be "success or failure" Jan 28 21:09:56.757: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.999342ms Jan 28 21:09:58.766: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035683249s Jan 28 21:10:00.778: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047682916s Jan 28 21:10:02.783: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052291507s Jan 28 21:10:04.791: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 8.060448821s Jan 28 21:10:07.013: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 10.281778591s Jan 28 21:10:09.020: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 12.289055509s Jan 28 21:10:11.027: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 14.296110773s Jan 28 21:10:13.032: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.301720969s Jan 28 21:10:15.039: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 18.308504248s Jan 28 21:10:17.049: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 20.318022722s Jan 28 21:10:19.058: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 22.327554326s Jan 28 21:10:21.073: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 24.34193063s Jan 28 21:10:23.079: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Running", Reason="", readiness=true. Elapsed: 26.348493245s Jan 28 21:10:25.085: INFO: Pod "pod-subpath-test-configmap-rjlf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.354325309s STEP: Saw pod success Jan 28 21:10:25.085: INFO: Pod "pod-subpath-test-configmap-rjlf" satisfied condition "success or failure" Jan 28 21:10:25.091: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-rjlf container test-container-subpath-configmap-rjlf: STEP: delete the pod Jan 28 21:10:25.138: INFO: Waiting for pod pod-subpath-test-configmap-rjlf to disappear Jan 28 21:10:25.148: INFO: Pod pod-subpath-test-configmap-rjlf no longer exists STEP: Deleting pod pod-subpath-test-configmap-rjlf Jan 28 21:10:25.148: INFO: Deleting pod "pod-subpath-test-configmap-rjlf" in namespace "subpath-1998" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:10:25.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1998" for this suite. • [SLOW TEST:28.696 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":4,"skipped":74,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:10:25.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:10:25.325: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 28 21:10:30.359: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 28 21:10:34.372: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become 
ready Jan 28 21:10:36.379: INFO: Creating deployment "test-rollover-deployment" Jan 28 21:10:36.416: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 28 21:10:38.432: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 28 21:10:38.439: INFO: Ensure that both replica sets have 1 created replica Jan 28 21:10:38.450: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 28 21:10:38.461: INFO: Updating deployment test-rollover-deployment Jan 28 21:10:38.462: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 28 21:10:40.567: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 28 21:10:40.580: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 28 21:10:40.596: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:40.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842638, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:42.631: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:42.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842638, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:44.616: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:44.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842638, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:46.614: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:46.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842638, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:48.613: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:48.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842646, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:50.614: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:50.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842646, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:52.644: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:52.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842646, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:54.612: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:54.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842646, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:56.623: INFO: all replica sets need to contain the pod-template-hash label Jan 28 21:10:56.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842646, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842636, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:10:58.618: INFO: Jan 28 21:10:58.618: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 28 21:10:58.780: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4932 
/apis/apps/v1/namespaces/deployment-4932/deployments/test-rollover-deployment d93c9aff-001f-4eb2-87a6-7ba58d659ee9 4956888 2 2020-01-28 21:10:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c36368 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-28 21:10:36 +0000 UTC,LastTransitionTime:2020-01-28 21:10:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-28 21:10:56 +0000 UTC,LastTransitionTime:2020-01-28 21:10:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 28 21:10:58.783: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4932 /apis/apps/v1/namespaces/deployment-4932/replicasets/test-rollover-deployment-574d6dfbff 93d5ae41-9c99-4f42-8173-b4890a54305b 4956876 2 2020-01-28 21:10:38 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d93c9aff-001f-4eb2-87a6-7ba58d659ee9 0xc002c367b7 0xc002c367b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c36828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 28 21:10:58.783: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 28 21:10:58.783: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4932 /apis/apps/v1/namespaces/deployment-4932/replicasets/test-rollover-controller 6d144013-d9df-40de-a945-cbdbae82d37e 4956887 2 2020-01-28 21:10:25 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d93c9aff-001f-4eb2-87a6-7ba58d659ee9 0xc002c366e7 0xc002c366e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c36748 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 28 21:10:58.783: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4932 /apis/apps/v1/namespaces/deployment-4932/replicasets/test-rollover-deployment-f6c94f66c 4d231a16-a687-4dc6-a5c6-8eebe3d558fa 4956823 2 2020-01-28 21:10:36 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d93c9aff-001f-4eb2-87a6-7ba58d659ee9 0xc002c36890 0xc002c36891}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c36908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 28 21:10:58.786: INFO: Pod "test-rollover-deployment-574d6dfbff-xk8hz" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-xk8hz test-rollover-deployment-574d6dfbff- deployment-4932 
/api/v1/namespaces/deployment-4932/pods/test-rollover-deployment-574d6dfbff-xk8hz 0479c652-a1f8-43fa-9852-ac1092cf015a 4956850 0 2020-01-28 21:10:38 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 93d5ae41-9c99-4f42-8173-b4890a54305b 0xc002b54497 0xc002b54498}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-99v9f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-99v9f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-99v9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:10:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:10:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:10:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:10:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-28 21:10:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 21:10:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e8d38731548c24313c157d536fcb5f2f6fad3b9095f6563250e7dfa1e97a0e3e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:10:58.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4932" for this suite. • [SLOW TEST:33.628 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":5,"skipped":77,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:10:58.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 28 21:10:58.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5705' Jan 28 21:10:59.143: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 28 21:10:59.143: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Jan 28 21:11:01.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5705' Jan 28 21:11:01.524: INFO: stderr: "" Jan 28 21:11:01.524: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:11:01.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5705" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":6,"skipped":83,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:11:01.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 28 21:11:01.736: INFO: Waiting up to 5m0s for pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae" in namespace "downward-api-2246" to be "success or failure" Jan 28 21:11:01.764: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Pending", Reason="", readiness=false. Elapsed: 27.064723ms Jan 28 21:11:03.778: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041826273s Jan 28 21:11:05.787: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049916414s Jan 28 21:11:07.798: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061347446s Jan 28 21:11:09.826: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088952838s Jan 28 21:11:11.856: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119565631s Jan 28 21:11:13.871: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.13473845s STEP: Saw pod success Jan 28 21:11:13.872: INFO: Pod "downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae" satisfied condition "success or failure" Jan 28 21:11:13.879: INFO: Trying to get logs from node jerma-node pod downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae container dapi-container: STEP: delete the pod Jan 28 21:11:13.942: INFO: Waiting for pod downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae to disappear Jan 28 21:11:13.951: INFO: Pod downward-api-531834ca-8749-4c3d-bdb6-ef9b2eecdbae no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:11:13.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2246" for this suite. • [SLOW TEST:12.394 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":97,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:11:13.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0c21dd9f-cc31-4e52-9994-d09917259398 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0c21dd9f-cc31-4e52-9994-d09917259398 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:11:22.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7877" for this suite. 
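The ConfigMap test above relies on the kubelet's periodic sync to propagate a ConfigMap update into an already-mounted volume. A minimal out-of-framework sketch of the same behavior (the pod name, ConfigMap name, and busybox image are illustrative, not the test's own fixtures):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-update-demo
spec:
  containers:
  - name: watcher
    image: busybox
    # Print the projected file forever so the update is visible in the logs.
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
# Change the key; the file under /etc/cm follows after the kubelet sync period.
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'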
• [SLOW TEST:8.239 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":107,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:11:22.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 28 21:11:22.286: INFO: Waiting up to 5m0s for pod "pod-f821310a-ae57-48fd-b77e-a55645c69369" in namespace "emptydir-6562" to be "success or failure" Jan 28 21:11:22.299: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Pending", Reason="", readiness=false. Elapsed: 12.539009ms Jan 28 21:11:24.304: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017827399s Jan 28 21:11:26.341: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05404756s Jan 28 21:11:28.354: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067691202s Jan 28 21:11:30.373: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086239089s Jan 28 21:11:32.378: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091199725s Jan 28 21:11:34.387: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.100399105s STEP: Saw pod success Jan 28 21:11:34.387: INFO: Pod "pod-f821310a-ae57-48fd-b77e-a55645c69369" satisfied condition "success or failure" Jan 28 21:11:34.391: INFO: Trying to get logs from node jerma-node pod pod-f821310a-ae57-48fd-b77e-a55645c69369 container test-container: STEP: delete the pod Jan 28 21:11:34.485: INFO: Waiting for pod pod-f821310a-ae57-48fd-b77e-a55645c69369 to disappear Jan 28 21:11:34.498: INFO: Pod pod-f821310a-ae57-48fd-b77e-a55645c69369 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:11:34.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6562" for this suite. 
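The emptydir case that just passed writes a file with mode 0666 on the node's default storage medium and verifies the permissions from inside the container. An equivalent hand-written pod (name and image are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # Create a file, force mode 0666, and echo the mode back.
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c '%a' /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}   # default medium: node disk
EOF
kubectl logs emptydir-0666-demo   # expected output: 666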
• [SLOW TEST:12.433 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":114,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:11:34.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-551f8828-d703-4032-848a-44ee9b268880 in namespace container-probe-8337 Jan 28 21:11:42.822: INFO: Started pod busybox-551f8828-d703-4032-848a-44ee9b268880 in namespace container-probe-8337 STEP: checking the pod's current state and verifying that restartCount is present Jan 28 21:11:42.829: INFO: Initial restart count of pod busybox-551f8828-d703-4032-848a-44ee9b268880 is 0 Jan 28 21:12:39.171: INFO: Restart count of pod container-probe-8337/busybox-551f8828-d703-4032-848a-44ee9b268880 is now 1 (56.341833224s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:12:39.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8337" for this suite. 
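The probe behavior exercised above (restart count moving from 0 to 1 once "cat /tmp/health" starts failing) can be reproduced by hand; the 30s healthy window and the probe timings below are illustrative choices, not the conformance test's exact values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy while /tmp/health exists; after 30s the probe starts failing
    # and the kubelet restarts the container.
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# After roughly a minute the restart count should read 1, as in the log above.
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'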
• [SLOW TEST:64.614 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":128,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:12:39.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-de057300-16cd-4a30-bb4b-6f473772a100 STEP: Creating a pod to test consume configMaps Jan 28 21:12:39.368: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956" in namespace "projected-9932" to be "success or failure" Jan 28 21:12:39.390: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Pending", Reason="", readiness=false. Elapsed: 22.241759ms Jan 28 21:12:41.397: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028996554s Jan 28 21:12:43.405: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037684952s Jan 28 21:12:45.413: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045426148s Jan 28 21:12:47.420: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052681668s Jan 28 21:12:49.426: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0588911s Jan 28 21:12:51.432: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.064616736s STEP: Saw pod success Jan 28 21:12:51.432: INFO: Pod "pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956" satisfied condition "success or failure" Jan 28 21:12:51.435: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956 container projected-configmap-volume-test: STEP: delete the pod Jan 28 21:12:51.506: INFO: Waiting for pod pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956 to disappear Jan 28 21:12:51.517: INFO: Pod pod-projected-configmaps-b433e6f3-2c27-4d6d-8c2b-23d6ebdc4956 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:12:51.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9932" for this suite. • [SLOW TEST:12.275 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:12:51.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:12:52.346: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:12:54.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 28 21:12:56.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:12:58.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842772, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:13:01.404: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:13:11.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4187" for this suite. STEP: Destroying namespace "webhook-4187-markers" for this suite. 
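Registering "via the AdmissionRegistration API" in the steps above amounts to creating a ValidatingWebhookConfiguration whose rules match pods and configmaps. A skeletal sketch follows; the backend service, path, and caBundle are placeholders a real webhook server must provide, and the namespaceSelector is what gives a "whitelisted" namespace its bypass:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-demo
webhooks:
- name: deny-pods-and-configmaps.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: default
      name: webhook-backend   # placeholder: must serve HTTPS
      path: /validate
    caBundle: Cg==            # placeholder: CA for the backend's cert
  # Namespaces labeled webhook-bypass are skipped entirely.
  namespaceSelector:
    matchExpressions:
    - key: webhook-bypass
      operator: DoesNotExist
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
EOF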
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.265 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":12,"skipped":195,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:13:11.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 28 21:13:11.922: INFO: Waiting up to 5m0s for pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf" in namespace "emptydir-6698" to be "success or failure" Jan 28 21:13:11.948: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.69361ms Jan 28 21:13:13.957: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033979419s Jan 28 21:13:15.963: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040168092s Jan 28 21:13:18.001: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077888309s Jan 28 21:13:20.007: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084407052s Jan 28 21:13:22.032: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108846924s STEP: Saw pod success Jan 28 21:13:22.032: INFO: Pod "pod-b590147b-cad7-4487-800f-ca7f0615c2cf" satisfied condition "success or failure" Jan 28 21:13:22.038: INFO: Trying to get logs from node jerma-node pod pod-b590147b-cad7-4487-800f-ca7f0615c2cf container test-container: STEP: delete the pod Jan 28 21:13:22.182: INFO: Waiting for pod pod-b590147b-cad7-4487-800f-ca7f0615c2cf to disappear Jan 28 21:13:22.190: INFO: Pod pod-b590147b-cad7-4487-800f-ca7f0615c2cf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:13:22.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6698" for this suite. 
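Same pattern as the earlier emptydir case, except medium: Memory backs the volume with tmpfs and the assertion is mode 0777. A hand-rolled equivalent (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # Show the filesystem type (tmpfs), then set and read back mode 0777.
    command: ["sh", "-c", "grep ' /mnt/test ' /proc/mounts; touch /mnt/test/f && chmod 0777 /mnt/test/f && stat -c '%a' /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs instead of node disk
EOF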
• [SLOW TEST:10.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:13:22.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 28 21:13:22.336: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:13:32.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6611" for this suite. 
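What the init-container test asserts is the Never-restart semantics: a failing init container moves the pod straight to Failed and the app containers never start. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init
    image: busybox
    command: ["sh", "-c", "exit 1"]   # always fails
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never reached"]
EOF
# Phase settles at Failed; the app container is never started.
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'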
• [SLOW TEST:10.922 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":14,"skipped":249,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:13:33.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 28 21:13:33.304: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a" in namespace "projected-3195" to be "success or failure" Jan 28 21:13:33.310: INFO: Pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.283513ms Jan 28 21:13:35.321: INFO: Pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017695604s Jan 28 21:13:37.329: INFO: Pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025607417s Jan 28 21:13:39.336: INFO: Pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031796259s Jan 28 21:13:41.342: INFO: Pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03865681s STEP: Saw pod success Jan 28 21:13:41.343: INFO: Pod "downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a" satisfied condition "success or failure" Jan 28 21:13:41.347: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a container client-container: STEP: delete the pod Jan 28 21:13:41.581: INFO: Waiting for pod downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a to disappear Jan 28 21:13:41.595: INFO: Pod downwardapi-volume-dc4db4e3-f7b6-4857-bb92-ede63ec7c72a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:13:41.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3195" for this suite. 
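The projected downwardAPI volume used above exposes the container's memory request as a file via resourceFieldRef, with the value rendered in bytes. An illustrative manifest (pod name, mount path, and the 32Mi request are arbitrary):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client
              resource: requests.memory
EOF
kubectl logs projected-downward-demo   # prints 33554432 (32Mi in bytes)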
• [SLOW TEST:8.478 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:13:41.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:13:41.917: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 28 21:13:46.988: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 28 21:13:49.000: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 28 21:13:49.085: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9647 /apis/apps/v1/namespaces/deployment-9647/deployments/test-cleanup-deployment b494b929-d73a-4939-b9bc-8f1979abbdc9 4957645 1 2020-01-28 21:13:49 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007c2028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 28 21:13:49.117: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9647 /apis/apps/v1/namespaces/deployment-9647/replicasets/test-cleanup-deployment-55ffc6b7b6 5e08e418-27b6-4028-bd25-485e1360cba7 4957647 1 2020-01-28 21:13:49 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b494b929-d73a-4939-b9bc-8f1979abbdc9 0xc000a9a387 0xc000a9a388}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a9a528 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 28 21:13:49.117: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 28 21:13:49.117: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9647 /apis/apps/v1/namespaces/deployment-9647/replicasets/test-cleanup-controller 0f8089a6-09e1-43fc-99ce-ceab32eccd84 4957646 1 2020-01-28 21:13:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b494b929-d73a-4939-b9bc-8f1979abbdc9 0xc000a9a1af 0xc000a9a1c0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000a9a2d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 
28 21:13:49.154: INFO: Pod "test-cleanup-controller-8b6q2" is available: &Pod{ObjectMeta:{test-cleanup-controller-8b6q2 test-cleanup-controller- deployment-9647 /api/v1/namespaces/deployment-9647/pods/test-cleanup-controller-8b6q2 b8c1bd65-7db2-4597-9edd-67cda7292946 4957638 0 2020-01-28 21:13:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0f8089a6-09e1-43fc-99ce-ceab32eccd84 0xc0006d7ec7 0xc0006d7ec8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cqfxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cqfxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cqfxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:13:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:13:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:13:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-28 21:13:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-28 21:13:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 21:13:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://055cfdf852ad4b9555e3b8d0d9cd861b0c04357fd5f6784e237e0a30d56b0587,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 28 21:13:49.155: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-9hczs" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-9hczs test-cleanup-deployment-55ffc6b7b6- deployment-9647 /api/v1/namespaces/deployment-9647/pods/test-cleanup-deployment-55ffc6b7b6-9hczs 50b6b386-3060-4efe-b3a3-d814d3782e8c 4957652 0 2020-01-28 21:13:49 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 5e08e418-27b6-4028-bd25-485e1360cba7 0xc000a50837 0xc000a50838}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cqfxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cqfxr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cqfxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{
Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:13:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:13:49.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9647" for this suite. • [SLOW TEST:7.614 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":16,"skipped":336,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:13:49.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 28 21:13:49.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f" in namespace "downward-api-9270" to be "success or failure" Jan 28 21:13:49.383: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750427ms Jan 28 21:13:51.392: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017723521s Jan 28 21:13:53.401: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026779448s Jan 28 21:13:55.415: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.040726448s Jan 28 21:13:57.423: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049081507s Jan 28 21:13:59.432: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058285842s Jan 28 21:14:01.441: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067080755s Jan 28 21:14:03.449: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.074633664s STEP: Saw pod success Jan 28 21:14:03.449: INFO: Pod "downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f" satisfied condition "success or failure" Jan 28 21:14:03.456: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f container client-container: STEP: delete the pod Jan 28 21:14:03.603: INFO: Waiting for pod downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f to disappear Jan 28 21:14:03.620: INFO: Pod downwardapi-volume-eb961067-0d4c-4db4-ba14-eb394baf094f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:14:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9270" for this suite. • [SLOW TEST:14.406 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":347,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:14:03.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:14:04.314: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:14:06.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:14:08.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:14:10.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715842844, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:14:13.358: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 28 21:14:13.410: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:14:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4921" for this suite. STEP: Destroying namespace "webhook-4921-markers" for this suite. 
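Denying CRD creation uses the same ValidatingWebhookConfiguration shape sketched earlier, with the rule retargeted at apiGroups ["apiextensions.k8s.io"], resources ["customresourcedefinitions"]. With such a webhook registered, a CRD create like the following (an arbitrary example definition, not the test's own) is rejected at admission rather than persisted:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
# Expected: denied by the admission webhook, not a stored CRD.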
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.966 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":18,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:14:13.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 28 21:14:13.660: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 28 21:14:13.698: INFO: Waiting for terminating namespaces to be deleted... 
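The scheduling block that follows exercises hostPort conflicts: with the same hostPort and protocol, a hostIP of 0.0.0.0 claims the port on every host address, so it conflicts with any specific hostIP such as 127.0.0.1. An illustrative first pod (name and image arbitrary; the test also pins both pods to the same node, elided here):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo
spec:
  containers:
  - name: server
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: "0.0.0.0"
      protocol: TCP
EOF
# A second pod on the same node with hostPort 54322 but hostIP 127.0.0.1
# stays Pending, which is exactly what the test asserts below.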
Jan 28 21:14:13.702: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 28 21:14:13.711: INFO: sample-webhook-deployment-5f65f8c764-whkvx from webhook-4921 started at 2020-01-28 21:14:04 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.711: INFO: Container sample-webhook ready: true, restart count 0 Jan 28 21:14:13.711: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.711: INFO: Container kube-proxy ready: true, restart count 0 Jan 28 21:14:13.711: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 28 21:14:13.711: INFO: Container weave ready: true, restart count 1 Jan 28 21:14:13.711: INFO: Container weave-npc ready: true, restart count 0 Jan 28 21:14:13.711: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 28 21:14:13.727: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container coredns ready: true, restart count 0 Jan 28 21:14:13.727: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container coredns ready: true, restart count 0 Jan 28 21:14:13.727: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 28 21:14:13.727: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container kube-proxy ready: true, restart count 0 Jan 28 21:14:13.727: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 28 21:14:13.727: INFO: Container weave ready: true, restart count 0 Jan 28 21:14:13.727: INFO: Container weave-npc ready: true, restart count 0 Jan 28 21:14:13.727: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container kube-scheduler ready: true, restart count 4 Jan 28 21:14:13.727: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container kube-apiserver ready: true, restart count 1 Jan 28 21:14:13.727: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 28 21:14:13.727: INFO: Container etcd ready: true, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-c2c30ce1-ba70-4e40-ba46-ef172490b888 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-c2c30ce1-ba70-4e40-ba46-ef172490b888 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-c2c30ce1-ba70-4e40-ba46-ef172490b888 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:19:32.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8888" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:318.469 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":19,"skipped":420,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:19:32.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-ce68cfb5-1437-43d9-b517-7c9426759e64 STEP: Creating a pod to test consume secrets Jan 28 21:19:32.262: INFO: Waiting up to 5m0s for pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d" in namespace "secrets-6301" to be "success or failure" Jan 28 21:19:32.274: INFO: Pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.45576ms Jan 28 21:19:34.280: INFO: Pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017622929s Jan 28 21:19:36.292: INFO: Pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030187685s Jan 28 21:19:38.371: INFO: Pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109112417s Jan 28 21:19:40.382: INFO: Pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.120212902s STEP: Saw pod success Jan 28 21:19:40.383: INFO: Pod "pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d" satisfied condition "success or failure" Jan 28 21:19:40.389: INFO: Trying to get logs from node jerma-node pod pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d container secret-volume-test: STEP: delete the pod Jan 28 21:19:40.458: INFO: Waiting for pod pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d to disappear Jan 28 21:19:40.466: INFO: Pod pod-secrets-558c3eb4-c1ec-44eb-8aa8-ecf74573e30d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:19:40.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6301" for this suite. • [SLOW TEST:8.402 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:19:40.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f1a3cf2b-7d01-45b9-96d7-f217bfd61a49 STEP: Creating a pod to test consume configMaps Jan 28 21:19:40.652: INFO: Waiting up to 5m0s for pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104" in namespace "configmap-101" to be "success or failure" Jan 28 21:19:40.765: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104": Phase="Pending", Reason="", readiness=false. Elapsed: 113.006914ms Jan 28 21:19:42.778: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125896133s Jan 28 21:19:44.787: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134965466s Jan 28 21:19:46.793: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140805604s Jan 28 21:19:48.799: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147672644s Jan 28 21:19:50.809: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.156760206s STEP: Saw pod success Jan 28 21:19:50.809: INFO: Pod "pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104" satisfied condition "success or failure" Jan 28 21:19:50.814: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104 container configmap-volume-test: STEP: delete the pod Jan 28 21:19:50.878: INFO: Waiting for pod pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104 to disappear Jan 28 21:19:50.908: INFO: Pod pod-configmaps-a1755129-ca0d-4e86-b0a7-6e3629b91104 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:19:50.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-101" for this suite. • [SLOW TEST:10.451 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:19:50.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 28 21:19:51.144: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8016 /api/v1/namespaces/watch-8016/configmaps/e2e-watch-test-label-changed 4f27eafd-5164-4d14-b8a7-23ba809bf7de 4958678 0 2020-01-28 21:19:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 28 21:19:51.145: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8016 /api/v1/namespaces/watch-8016/configmaps/e2e-watch-test-label-changed 4f27eafd-5164-4d14-b8a7-23ba809bf7de 4958679 0 2020-01-28 21:19:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 28 21:19:51.145: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8016 /api/v1/namespaces/watch-8016/configmaps/e2e-watch-test-label-changed 4f27eafd-5164-4d14-b8a7-23ba809bf7de 4958680 0 2020-01-28 21:19:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] 
[] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 28 21:20:01.284: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8016 /api/v1/namespaces/watch-8016/configmaps/e2e-watch-test-label-changed 4f27eafd-5164-4d14-b8a7-23ba809bf7de 4958718 0 2020-01-28 21:19:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 28 21:20:01.285: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8016 /api/v1/namespaces/watch-8016/configmaps/e2e-watch-test-label-changed 4f27eafd-5164-4d14-b8a7-23ba809bf7de 4958719 0 2020-01-28 21:19:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 28 21:20:01.285: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8016 /api/v1/namespaces/watch-8016/configmaps/e2e-watch-test-label-changed 4f27eafd-5164-4d14-b8a7-23ba809bf7de 4958721 0 2020-01-28 21:19:51 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:01.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8016" for this suite. 
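Editor's note: the label-selector watch behaviour recorded above can be reproduced by hand with kubectl. This is a minimal sketch using an illustrative object name (e2e-watch-demo) rather than the generated names in the log, and it assumes a cluster reachable through the current kubeconfig:

    # Create a ConfigMap carrying the label the watcher selects on.
    kubectl create configmap e2e-watch-demo --from-literal=mutation=0
    kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored
    # In a second terminal, watch only objects matching the selector.
    kubectl get configmaps -l watch-this-configmap=label-changed-and-restored -w
    # Changing the label value drops the object out of the selector; at the
    # API level the watcher observes a DELETED event even though the object
    # still exists.
    kubectl label configmap e2e-watch-demo watch-this-configmap=other --overwrite
    # Restoring the original value produces an ADDED event, as in the log above.
    kubectl label configmap e2e-watch-demo watch-this-configmap=label-changed-and-restored --overwrite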
• [SLOW TEST:10.369 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":22,"skipped":504,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:01.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-6ce9167c-e3b1-4250-b499-48d3ca56bf4c STEP: Creating a pod to test consume configMaps Jan 28 21:20:01.402: INFO: Waiting up to 5m0s for pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56" in namespace "configmap-1374" to be "success or failure" Jan 28 21:20:01.435: INFO: Pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56": Phase="Pending", Reason="", readiness=false. Elapsed: 32.819921ms Jan 28 21:20:03.444: INFO: Pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041964456s Jan 28 21:20:05.453: INFO: Pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051428823s Jan 28 21:20:07.461: INFO: Pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059329431s Jan 28 21:20:09.469: INFO: Pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067411007s STEP: Saw pod success Jan 28 21:20:09.470: INFO: Pod "pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56" satisfied condition "success or failure" Jan 28 21:20:09.475: INFO: Trying to get logs from node jerma-node pod pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56 container configmap-volume-test: STEP: delete the pod Jan 28 21:20:09.532: INFO: Waiting for pod pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56 to disappear Jan 28 21:20:09.628: INFO: Pod pod-configmaps-82dcecf7-d44a-41be-b12d-82b0792efd56 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:09.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1374" for this suite. 
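Editor's note: the pattern this spec exercises is projecting a single ConfigMap key to a chosen path with an explicit per-item file mode. A rough user-facing equivalent follows; the names, the busybox image, and the 0400 mode are illustrative assumptions, not values taken from the test:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-config
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-1 && cat /etc/configmap-volume/path/to/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: demo-config
          items:
          - key: data-1          # only this key is projected
            path: path/to/data-1 # ...to this relative path in the mount
            mode: 0400           # the per-item file mode ("Item mode set")
    EOF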
• [SLOW TEST:8.352 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":509,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:09.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a4fcc55e-c7f6-45c5-a9a1-d2a430bf3199 STEP: Creating a pod to test consume secrets Jan 28 21:20:09.817: INFO: Waiting up to 5m0s for pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc" in namespace "secrets-3253" to be "success or failure" Jan 28 21:20:09.876: INFO: Pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.972129ms Jan 28 21:20:11.888: INFO: Pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070531817s Jan 28 21:20:13.900: INFO: Pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082357255s Jan 28 21:20:15.908: INFO: Pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090324282s Jan 28 21:20:17.928: INFO: Pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110879338s STEP: Saw pod success Jan 28 21:20:17.930: INFO: Pod "pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc" satisfied condition "success or failure" Jan 28 21:20:17.938: INFO: Trying to get logs from node jerma-node pod pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc container secret-volume-test: STEP: delete the pod Jan 28 21:20:18.110: INFO: Waiting for pod pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc to disappear Jan 28 21:20:18.123: INFO: Pod pod-secrets-53ae5d64-30f3-4468-b979-f5f2765780cc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:18.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3253" for this suite. 
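Editor's note: the "multiple volumes" case above amounts to mounting one Secret at two paths in the same pod. A minimal sketch with assumed names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-multi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        # Reads the same key through both mounts.
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
      volumes:
      - name: secret-volume-1
        secret:
          secretName: demo-secret
      - name: secret-volume-2
        secret:
          secretName: demo-secret
    EOF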
• [SLOW TEST:8.483 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":512,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:18.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 28 21:20:18.307: INFO: Waiting up to 5m0s for pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b" in namespace "emptydir-999" to be "success or failure" Jan 28 21:20:18.335: INFO: Pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.212043ms Jan 28 21:20:20.341: INFO: Pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033582952s Jan 28 21:20:22.352: INFO: Pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044289752s Jan 28 21:20:24.358: INFO: Pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050330672s Jan 28 21:20:26.365: INFO: Pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057654655s STEP: Saw pod success Jan 28 21:20:26.366: INFO: Pod "pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b" satisfied condition "success or failure" Jan 28 21:20:26.369: INFO: Trying to get logs from node jerma-node pod pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b container test-container: STEP: delete the pod Jan 28 21:20:26.607: INFO: Waiting for pod pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b to disappear Jan 28 21:20:26.614: INFO: Pod pod-236ceb9d-1e72-4f03-bfa6-9b48cf22484b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:26.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-999" for this suite. 
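Editor's note: the "(root,0644,default)" triple in the spec name means: write as root, expect file mode 0644, on the default (node-disk) emptyDir medium. A hand-rolled equivalent, with assumed names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        # busybox runs as root by default; write a file and show its mode.
        command: ["sh", "-c", "echo hello > /ephemeral/f && chmod 0644 /ephemeral/f && ls -l /ephemeral/f"]
        volumeMounts:
        - name: scratch
          mountPath: /ephemeral
      volumes:
      - name: scratch
        emptyDir: {}   # default medium; medium: Memory would use tmpfs instead
    EOF
    kubectl logs emptydir-demo   # expect: -rw-r--r-- ... /ephemeral/f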
• [SLOW TEST:8.487 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":524,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:26.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-d10defee-d706-4df7-8d5a-4e29de8919a8 STEP: Creating a pod to test consume secrets Jan 28 21:20:26.741: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e" in namespace "projected-8818" to be "success or failure" Jan 28 21:20:26.747: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482957ms Jan 28 21:20:28.754: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013738224s Jan 28 21:20:30.764: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023400436s Jan 28 21:20:32.771: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029968355s Jan 28 21:20:34.778: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037454724s Jan 28 21:20:36.789: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048387729s STEP: Saw pod success Jan 28 21:20:36.789: INFO: Pod "pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e" satisfied condition "success or failure" Jan 28 21:20:36.795: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e container projected-secret-volume-test: STEP: delete the pod Jan 28 21:20:36.885: INFO: Waiting for pod pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e to disappear Jan 28 21:20:36.894: INFO: Pod pod-projected-secrets-84b50b52-e0cd-4315-af72-343b743afb8e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:36.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8818" for this suite. 
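Editor's note: a projected volume generalizes the plain secret/configMap volumes above: several sources share one mount point, each with optional key-to-path mappings. A minimal sketch (names assumed):

    kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["cat", "/etc/projected/renamed-key"]
        volumeMounts:
        - name: projected-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-volume
        projected:
          sources:
          - secret:
              name: projected-demo-secret
              items:
              - key: data-1
                path: renamed-key   # the mapping the spec name refers to
    EOF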
• [SLOW TEST:10.314 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":532,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:36.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0128 21:20:39.746254 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 28 21:20:39.746: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:39.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6352" for this suite. 
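Editor's note: what the garbage collector verifies here is ordinary cascading deletion: the ReplicaSet and Pods carry ownerReferences back to the Deployment, so deleting the owner without orphaning removes them too (after the retries the log shows). By hand, roughly:

    kubectl create deployment gc-demo --image=nginx
    kubectl scale deployment gc-demo --replicas=2
    kubectl get rs,pods -l app=gc-demo   # one ReplicaSet, two Pods
    kubectl delete deployment gc-demo    # cascading delete (the default)
    # The RS and Pods disappear shortly afterwards. To orphan them instead,
    # pass --cascade=orphan (on older kubectl releases: --cascade=false).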
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":27,"skipped":544,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:39.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Jan 28 21:20:40.628: INFO: Created pod &Pod{ObjectMeta:{dns-7663 dns-7663 /api/v1/namespaces/dns-7663/pods/dns-7663 0083b46b-2997-4157-a352-40d227abcebb 4958932 0 2020-01-28 21:20:40 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-csv6n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-csv6n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-csv6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,Readine
ssGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... Jan 28 21:20:50.682: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7663 PodName:dns-7663 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:20:50.682: INFO: >>> kubeConfig: /root/.kube/config I0128 21:20:50.740446 8 log.go:172] (0xc0029dc580) (0xc0025d7680) Create stream I0128 21:20:50.740567 8 log.go:172] (0xc0029dc580) (0xc0025d7680) Stream added, broadcasting: 1 I0128 21:20:50.747225 8 log.go:172] (0xc0029dc580) Reply frame received for 1 I0128 21:20:50.747280 8 log.go:172] (0xc0029dc580) (0xc0024cb4a0) Create stream I0128 21:20:50.747289 8 log.go:172] (0xc0029dc580) (0xc0024cb4a0) Stream added, broadcasting: 3 I0128 21:20:50.749038 8 log.go:172] (0xc0029dc580) Reply frame received for 3 I0128 21:20:50.749080 8 log.go:172] (0xc0029dc580) (0xc002446000) Create stream I0128 21:20:50.749093 8 log.go:172] (0xc0029dc580) (0xc002446000) Stream added, broadcasting: 5 I0128 21:20:50.750764 8 log.go:172] (0xc0029dc580) Reply frame received for 5 I0128 21:20:50.874472 8 log.go:172] (0xc0029dc580) Data frame received for 3 I0128 21:20:50.874675 8 log.go:172] (0xc0024cb4a0) (3) Data frame handling I0128 21:20:50.874735 8 log.go:172] (0xc0024cb4a0) (3) Data frame sent I0128 21:20:50.971181 8 log.go:172] (0xc0029dc580) (0xc0024cb4a0) Stream removed, broadcasting: 3 I0128 21:20:50.971671 8 log.go:172] (0xc0029dc580) Data frame received for 1 I0128 21:20:50.971713 8 log.go:172] (0xc0025d7680) (1) Data frame handling I0128 21:20:50.971744 8 log.go:172] (0xc0025d7680) (1) Data frame sent I0128 21:20:50.972156 8 log.go:172] (0xc0029dc580) (0xc0025d7680) Stream removed, broadcasting: 1 I0128 21:20:50.973873 8 log.go:172] (0xc0029dc580) (0xc002446000) Stream removed, broadcasting: 5 I0128 21:20:50.973975 8 log.go:172] (0xc0029dc580) Go away received I0128 21:20:50.974113 8 log.go:172] (0xc0029dc580) (0xc0025d7680) Stream removed, broadcasting: 1 I0128 21:20:50.974592 8 log.go:172] (0xc0029dc580) (0xc0024cb4a0) Stream removed, broadcasting: 3 I0128 21:20:50.974612 8 log.go:172] (0xc0029dc580) (0xc002446000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jan 28 21:20:50.974: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7663 PodName:dns-7663 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:20:50.975: INFO: >>> kubeConfig: /root/.kube/config I0128 21:20:51.033520 8 log.go:172] (0xc000ffc630) (0xc0024cb720) Create stream I0128 21:20:51.033804 8 log.go:172] (0xc000ffc630) (0xc0024cb720) Stream added, broadcasting: 1 I0128 21:20:51.043608 8 log.go:172] (0xc000ffc630) Reply frame received for 1 I0128 21:20:51.043699 8 log.go:172] (0xc000ffc630) (0xc002446140) Create stream I0128 21:20:51.043717 8 log.go:172] (0xc000ffc630) (0xc002446140) Stream added, broadcasting: 3 I0128 21:20:51.045612 8 log.go:172] (0xc000ffc630) Reply frame received for 3 I0128 21:20:51.045669 8 log.go:172] (0xc000ffc630) (0xc00278b9a0) Create stream I0128 21:20:51.045675 8 log.go:172] (0xc000ffc630) (0xc00278b9a0) Stream added, broadcasting: 5 I0128 21:20:51.047819 8 log.go:172] (0xc000ffc630) Reply frame received for 5 I0128 21:20:51.136640 8 log.go:172] (0xc000ffc630) Data frame received for 3 I0128 21:20:51.136760 8 log.go:172] (0xc002446140) (3) Data frame handling I0128 21:20:51.136801 8 log.go:172] (0xc002446140) (3) Data frame sent I0128 21:20:51.220016 8 log.go:172] (0xc000ffc630) (0xc002446140) Stream removed, broadcasting: 3 I0128 21:20:51.220237 8 log.go:172] (0xc000ffc630) Data frame received for 1 I0128 21:20:51.220264 8 log.go:172] (0xc0024cb720) (1) Data frame handling I0128 21:20:51.220290 8 log.go:172] (0xc0024cb720) (1) Data frame sent I0128 21:20:51.220301 8 log.go:172] (0xc000ffc630) (0xc0024cb720) Stream removed, broadcasting: 1 I0128 21:20:51.220390 8 log.go:172] (0xc000ffc630) (0xc00278b9a0) Stream removed, broadcasting: 5 I0128 21:20:51.220545 8 log.go:172] (0xc000ffc630) Go away received I0128 21:20:51.220764 8 log.go:172] (0xc000ffc630) (0xc0024cb720) Stream removed, broadcasting: 1 I0128 21:20:51.220777 8 log.go:172] (0xc000ffc630) (0xc002446140) Stream removed, broadcasting: 3 I0128 21:20:51.220787 8 log.go:172] (0xc000ffc630) (0xc00278b9a0) Stream removed, broadcasting: 5 Jan 28 21:20:51.220: INFO: Deleting pod dns-7663... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:20:51.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7663" for this suite. 
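Editor's note: the nameserver (1.1.1.1) and search domain (resolv.conf.local) being verified are visible in the pod dump above. The corresponding user-facing manifest looks like the following sketch; the pod name and the busybox image are illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["cat", "/etc/resolv.conf"]
      dnsPolicy: "None"          # ignore the cluster DNS entirely...
      dnsConfig:                 # ...and use exactly this configuration
        nameservers: ["1.1.1.1"]
        searches: ["resolv.conf.local"]
    EOF
    kubectl logs dns-demo   # expect: nameserver 1.1.1.1 / search resolv.conf.local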
• [SLOW TEST:11.554 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":28,"skipped":558,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:20:51.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 28 21:21:03.957: INFO: Successfully updated pod "adopt-release-8jsp5" STEP: Checking that the Job readopts the Pod Jan 28 21:21:03.957: INFO: Waiting up to 15m0s for pod "adopt-release-8jsp5" in namespace "job-432" to be "adopted" Jan 28 21:21:03.980: INFO: Pod "adopt-release-8jsp5": Phase="Running", Reason="", readiness=true. Elapsed: 22.151719ms Jan 28 21:21:06.004: INFO: Pod "adopt-release-8jsp5": Phase="Running", Reason="", readiness=true. Elapsed: 2.046444177s Jan 28 21:21:06.005: INFO: Pod "adopt-release-8jsp5" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 28 21:21:06.528: INFO: Successfully updated pod "adopt-release-8jsp5" STEP: Checking that the Job releases the Pod Jan 28 21:21:06.529: INFO: Waiting up to 15m0s for pod "adopt-release-8jsp5" in namespace "job-432" to be "released" Jan 28 21:21:06.537: INFO: Pod "adopt-release-8jsp5": Phase="Running", Reason="", readiness=true. Elapsed: 8.038817ms Jan 28 21:21:06.537: INFO: Pod "adopt-release-8jsp5" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:21:06.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-432" for this suite. 
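Editor's note: adoption and release are driven purely by labels and ownerReferences, so the sequence above can be observed by hand. A sketch with assumed names (Job pods are labeled with, among others, job-name and controller-uid):

    kubectl create job adopt-demo --image=busybox -- sleep 300
    POD=$(kubectl get pods -l job-name=adopt-demo -o jsonpath='{.items[0].metadata.name}')
    # The Job is the pod's controller:
    kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'
    # Removing the matching labels makes the Job controller release the pod
    # (its controllerRef is dropped); restoring them lets the Job re-adopt it.
    kubectl label pod "$POD" job-name- controller-uid-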
• [SLOW TEST:15.290 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":29,"skipped":569,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:21:06.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 28 21:21:06.780: INFO: Number of nodes with available pods: 0 Jan 28 21:21:06.780: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:09.068: INFO: Number of nodes with available pods: 0 Jan 28 21:21:09.069: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:09.828: INFO: Number of nodes with available pods: 0 Jan 28 21:21:09.829: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:10.797: INFO: Number of nodes with available pods: 0 Jan 28 21:21:10.797: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:11.825: INFO: Number of nodes with available pods: 0 Jan 28 21:21:11.826: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:13.788: INFO: Number of nodes with available pods: 0 Jan 28 21:21:13.789: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:15.440: INFO: Number of nodes with available pods: 0 Jan 28 21:21:15.440: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:15.939: INFO: Number of nodes with available pods: 0 Jan 28 21:21:15.939: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:16.790: INFO: Number of nodes with available pods: 0 Jan 28 21:21:16.790: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:17.798: INFO: Number of nodes with available pods: 1 Jan 28 21:21:17.799: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:21:18.789: INFO: Number of nodes with available pods: 2 Jan 28 21:21:18.790: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 28 21:21:18.831: INFO: Number of nodes with available pods: 2 Jan 28 21:21:18.831: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1994, will wait for the garbage collector to delete the pods Jan 28 21:21:19.962: INFO: Deleting DaemonSet.extensions daemon-set took: 23.04224ms Jan 28 21:21:20.062: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.519986ms Jan 28 21:21:33.173: INFO: Number of nodes with available pods: 0 Jan 28 21:21:33.173: INFO: Number of running nodes: 0, number of available pods: 0 Jan 28 21:21:33.185: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1994/daemonsets","resourceVersion":"4959193"},"items":null} Jan 28 21:21:33.190: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1994/pods","resourceVersion":"4959193"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:21:33.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1994" for this suite. • [SLOW TEST:26.618 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":30,"skipped":570,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:21:33.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:21:46.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3671" for this suite. 
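Editor's note: the quota lifecycle above (status calculated, usage captured while the pod runs, usage released on delete) can be watched with a ResourceQuota like the following; the hard limits are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-demo
    spec:
      hard:
        pods: "2"
        requests.cpu: "500m"
        requests.memory: "512Mi"
    EOF
    # Once cpu/memory quota exists, new pods must declare requests or be rejected.
    kubectl describe resourcequota quota-demo   # shows Used vs Hard as pods come and go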
• [SLOW TEST:13.450 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":31,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:21:46.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Jan 28 21:21:46.763: INFO: Waiting up to 5m0s for pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1" in namespace "containers-4640" to be "success or failure" Jan 28 21:21:46.811: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.587371ms Jan 28 21:21:48.820: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056749422s Jan 28 21:21:50.828: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065310296s Jan 28 21:21:52.841: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078227439s Jan 28 21:21:55.673: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.910445884s Jan 28 21:21:57.680: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.91716346s STEP: Saw pod success Jan 28 21:21:57.680: INFO: Pod "client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1" satisfied condition "success or failure" Jan 28 21:21:57.684: INFO: Trying to get logs from node jerma-node pod client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1 container test-container: STEP: delete the pod Jan 28 21:21:57.723: INFO: Waiting for pod client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1 to disappear Jan 28 21:21:57.752: INFO: Pod client-containers-3d72c526-dd5e-488a-8661-adef12e1c2b1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:21:57.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4640" for this suite. 
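Editor's note: the override being tested is the standard command/args split: command replaces the image's ENTRYPOINT, args replaces its CMD. A minimal sketch (name and image assumed):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: entrypoint-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["echo"]                          # replaces the image ENTRYPOINT
        args: ["hello from an overridden command"] # replaces the image CMD
    EOF
    kubectl logs entrypoint-demo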
• [SLOW TEST:11.088 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":604,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:21:57.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2325 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2325 STEP: creating replication controller externalsvc in namespace services-2325 I0128 21:21:58.067686 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2325, replica count: 2 I0128 21:22:01.119117 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:22:04.119550 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:22:07.120305 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:22:10.120839 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 28 21:22:10.249: INFO: Creating new exec pod Jan 28 21:22:18.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2325 execpodbhf7h -- /bin/sh -x -c nslookup nodeport-service' Jan 28 21:22:20.888: INFO: stderr: "I0128 21:22:20.604751 672 log.go:172] (0xc000bc3290) (0xc0004943c0) Create stream\nI0128 21:22:20.605107 672 log.go:172] (0xc000bc3290) (0xc0004943c0) Stream added, broadcasting: 1\nI0128 21:22:20.628771 672 log.go:172] (0xc000bc3290) Reply frame received for 1\nI0128 21:22:20.629162 672 log.go:172] (0xc000bc3290) (0xc0006a6780) Create stream\nI0128 21:22:20.629242 672 log.go:172] (0xc000bc3290) (0xc0006a6780) Stream added, broadcasting: 3\nI0128 21:22:20.632167 672 log.go:172] (0xc000bc3290) Reply frame received for 3\nI0128 21:22:20.632352 672 log.go:172] 
(0xc000bc3290) (0xc0004f9540) Create stream\nI0128 21:22:20.632398 672 log.go:172] (0xc000bc3290) (0xc0004f9540) Stream added, broadcasting: 5\nI0128 21:22:20.635434 672 log.go:172] (0xc000bc3290) Reply frame received for 5\nI0128 21:22:20.743183 672 log.go:172] (0xc000bc3290) Data frame received for 5\nI0128 21:22:20.743350 672 log.go:172] (0xc0004f9540) (5) Data frame handling\nI0128 21:22:20.743386 672 log.go:172] (0xc0004f9540) (5) Data frame sent\n+ nslookup nodeport-service\nI0128 21:22:20.789487 672 log.go:172] (0xc000bc3290) Data frame received for 3\nI0128 21:22:20.789663 672 log.go:172] (0xc0006a6780) (3) Data frame handling\nI0128 21:22:20.789707 672 log.go:172] (0xc0006a6780) (3) Data frame sent\nI0128 21:22:20.790706 672 log.go:172] (0xc000bc3290) Data frame received for 3\nI0128 21:22:20.790728 672 log.go:172] (0xc0006a6780) (3) Data frame handling\nI0128 21:22:20.790748 672 log.go:172] (0xc0006a6780) (3) Data frame sent\nI0128 21:22:20.869879 672 log.go:172] (0xc000bc3290) Data frame received for 1\nI0128 21:22:20.870162 672 log.go:172] (0xc000bc3290) (0xc0006a6780) Stream removed, broadcasting: 3\nI0128 21:22:20.870243 672 log.go:172] (0xc0004943c0) (1) Data frame handling\nI0128 21:22:20.870263 672 log.go:172] (0xc0004943c0) (1) Data frame sent\nI0128 21:22:20.870312 672 log.go:172] (0xc000bc3290) (0xc0004943c0) Stream removed, broadcasting: 1\nI0128 21:22:20.871533 672 log.go:172] (0xc000bc3290) (0xc0004f9540) Stream removed, broadcasting: 5\nI0128 21:22:20.871831 672 log.go:172] (0xc000bc3290) Go away received\nI0128 21:22:20.872057 672 log.go:172] (0xc000bc3290) (0xc0004943c0) Stream removed, broadcasting: 1\nI0128 21:22:20.872089 672 log.go:172] (0xc000bc3290) (0xc0006a6780) Stream removed, broadcasting: 3\nI0128 21:22:20.872104 672 log.go:172] (0xc000bc3290) (0xc0004f9540) Stream removed, broadcasting: 5\n" Jan 28 21:22:20.888: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2325.svc.cluster.local\tcanonical name = externalsvc.services-2325.svc.cluster.local.\nName:\texternalsvc.services-2325.svc.cluster.local\nAddress: 10.96.61.192\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2325, will wait for the garbage collector to delete the pods Jan 28 21:22:20.952: INFO: Deleting ReplicationController externalsvc took: 7.289582ms Jan 28 21:22:21.352: INFO: Terminating ReplicationController externalsvc pods took: 400.478899ms Jan 28 21:22:32.427: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:22:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2325" for this suite. 
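Editor's note: an ExternalName service is just a DNS CNAME, which is exactly what the nslookup output above shows (nodeport-service resolving as a canonical name to externalsvc.services-2325.svc.cluster.local). A standalone sketch with assumed names:

    kubectl create service externalname demo-ext --external-name=externalsvc.example.com
    kubectl run execpod --image=busybox --restart=Never -- sleep 3600
    kubectl exec execpod -- nslookup demo-ext
    # Expect a CNAME answer pointing at externalsvc.example.com; no cluster IP
    # is allocated for an ExternalName service.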
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:34.702 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":33,"skipped":605,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:22:32.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:22:32.563: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 28 21:22:35.598: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:22:35.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5541" for this suite. 
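Editor's note: the "failure condition" being checked is the ReplicaFailure condition that the controller writes into the RC status when pod creation is rejected (here, by the quota). It can be read directly; condition-test is the RC name from the log:

    kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}{"\n"}'
    # Scaling the RC down below the quota clears the condition, as the log shows.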
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":34,"skipped":615,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:22:35.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 28 21:22:56.883: INFO: DNS probes using dns-test-72e53895-e2dd-4a9c-8f93-69d0ce5cb729 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 28 21:23:11.119: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 28 21:23:11.123: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 28 21:23:11.123: INFO: Lookups using dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local] Jan 28 21:23:16.137: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 28 21:23:16.144: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 28 21:23:16.144: INFO: Lookups using dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local] Jan 28 21:23:21.135: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 28 21:23:21.142: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 28 21:23:21.142: INFO: Lookups using dns-2678/dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local] Jan 28 21:23:26.146: INFO: DNS probes using dns-test-d24c4b92-e62b-44d8-8fd1-490d8c1b204d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 28 21:23:40.468: INFO: DNS probes using dns-test-2f961e23-4258-4599-9568-d17ea27cac16 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:23:40.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2678" for this suite. 
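The DNS spec above walks one service through ExternalName, a new CNAME target, and finally ClusterIP, probing resolution at each step with the dig loops shown. A rough kubectl equivalent of the three transitions, assuming a hypothetical namespace dns-demo (the test itself drives these changes through the API):

# An ExternalName service resolves as a CNAME to the named host.
kubectl create service externalname dns-test-service-3 --external-name foo.example.com -n dns-demo
# From any pod in the cluster, the probe is essentially:
dig +short dns-test-service-3.dns-demo.svc.cluster.local CNAME
# Re-point the CNAME, as the test does with bar.example.com:
kubectl patch service dns-test-service-3 -n dns-demo --type=merge \
  -p '{"spec":{"externalName":"bar.example.com"}}'
# Converting to ClusterIP makes the same name answer with an A record instead:
kubectl patch service dns-test-service-3 -n dns-demo --type=merge \
  -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'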
• [SLOW TEST:64.974 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":35,"skipped":622,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:23:40.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:23:45.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8111" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":36,"skipped":636,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:23:45.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:23:45.515: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:23:53.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2929" for this suite. 
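The websocket-logs spec that just ran reads the same log subresource that kubectl logs consumes; the e2e framework simply performs the GET over a websocket-upgraded connection. A sketch of the two access paths, with a hypothetical pod name:

# Ordinary client path:
kubectl logs pod-logs-websocket-demo --namespace=pods-2929
# The underlying API endpoint; the test issues this request over a websocket:
kubectl get --raw "/api/v1/namespaces/pods-2929/pods/pod-logs-websocket-demo/log"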
• [SLOW TEST:8.256 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":640,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:23:53.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:23:53.725: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7464 I0128 21:23:53.741027 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7464, replica count: 1 I0128 21:23:54.791906 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:23:55.792508 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:23:56.793021 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:23:57.793557 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:23:58.794619 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:23:59.795228 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:24:00.795792 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 28 21:24:00.969: INFO: Created: latency-svc-4wxgb Jan 28 21:24:00.981: INFO: Got endpoints: latency-svc-4wxgb [85.19531ms] Jan 28 21:24:01.121: INFO: Created: latency-svc-w6zmn Jan 28 21:24:01.133: INFO: Got endpoints: latency-svc-w6zmn [151.190873ms] Jan 28 21:24:01.180: INFO: Created: latency-svc-m6t66 Jan 28 21:24:01.192: INFO: Got endpoints: latency-svc-m6t66 [209.167102ms] Jan 28 21:24:01.267: INFO: Created: latency-svc-xrbc7 Jan 28 21:24:01.329: INFO: Got endpoints: latency-svc-xrbc7 [346.445728ms] Jan 28 21:24:01.423: INFO: Created: latency-svc-456dm Jan 28 21:24:01.471: INFO: Got endpoints: latency-svc-456dm [489.432673ms] Jan 28 21:24:01.473: INFO: Created: latency-svc-hzts4 Jan 28 21:24:01.487: INFO: Got endpoints: latency-svc-hzts4 [505.332057ms] 
Jan 28 21:24:01.676: INFO: Created: latency-svc-rgsj9 Jan 28 21:24:01.697: INFO: Got endpoints: latency-svc-rgsj9 [714.389112ms] Jan 28 21:24:01.895: INFO: Created: latency-svc-mspb8 Jan 28 21:24:01.901: INFO: Got endpoints: latency-svc-mspb8 [917.907391ms] Jan 28 21:24:01.948: INFO: Created: latency-svc-b6pbm Jan 28 21:24:02.086: INFO: Got endpoints: latency-svc-b6pbm [1.102793667s] Jan 28 21:24:02.140: INFO: Created: latency-svc-wdv4c Jan 28 21:24:02.173: INFO: Got endpoints: latency-svc-wdv4c [1.189539838s] Jan 28 21:24:02.175: INFO: Created: latency-svc-vrlr4 Jan 28 21:24:02.186: INFO: Got endpoints: latency-svc-vrlr4 [1.203185524s] Jan 28 21:24:02.337: INFO: Created: latency-svc-6fvhh Jan 28 21:24:02.345: INFO: Got endpoints: latency-svc-6fvhh [1.363875316s] Jan 28 21:24:02.367: INFO: Created: latency-svc-2cbg8 Jan 28 21:24:02.379: INFO: Got endpoints: latency-svc-2cbg8 [1.396286075s] Jan 28 21:24:02.431: INFO: Created: latency-svc-zrm8z Jan 28 21:24:02.545: INFO: Got endpoints: latency-svc-zrm8z [1.562179296s] Jan 28 21:24:02.581: INFO: Created: latency-svc-kpttc Jan 28 21:24:02.590: INFO: Got endpoints: latency-svc-kpttc [1.607366305s] Jan 28 21:24:02.629: INFO: Created: latency-svc-75h6d Jan 28 21:24:02.723: INFO: Got endpoints: latency-svc-75h6d [1.740189213s] Jan 28 21:24:02.725: INFO: Created: latency-svc-zhmtw Jan 28 21:24:02.743: INFO: Got endpoints: latency-svc-zhmtw [1.610005926s] Jan 28 21:24:02.763: INFO: Created: latency-svc-wg5gw Jan 28 21:24:02.779: INFO: Created: latency-svc-mw4bx Jan 28 21:24:02.779: INFO: Got endpoints: latency-svc-wg5gw [1.587185555s] Jan 28 21:24:02.782: INFO: Got endpoints: latency-svc-mw4bx [1.453028718s] Jan 28 21:24:02.798: INFO: Created: latency-svc-sm8kb Jan 28 21:24:02.812: INFO: Got endpoints: latency-svc-sm8kb [1.34049568s] Jan 28 21:24:02.888: INFO: Created: latency-svc-7shh6 Jan 28 21:24:02.893: INFO: Got endpoints: latency-svc-7shh6 [1.405933482s] Jan 28 21:24:02.926: INFO: Created: latency-svc-hwkld Jan 28 21:24:02.932: INFO: Got endpoints: latency-svc-hwkld [1.234775606s] Jan 28 21:24:02.976: INFO: Created: latency-svc-6s8jl Jan 28 21:24:03.093: INFO: Got endpoints: latency-svc-6s8jl [1.192264728s] Jan 28 21:24:03.107: INFO: Created: latency-svc-hxlqd Jan 28 21:24:03.127: INFO: Got endpoints: latency-svc-hxlqd [1.041182021s] Jan 28 21:24:03.161: INFO: Created: latency-svc-2m8lf Jan 28 21:24:03.174: INFO: Got endpoints: latency-svc-2m8lf [1.001194955s] Jan 28 21:24:03.319: INFO: Created: latency-svc-2jkn8 Jan 28 21:24:03.366: INFO: Got endpoints: latency-svc-2jkn8 [1.180134502s] Jan 28 21:24:03.370: INFO: Created: latency-svc-ngn7v Jan 28 21:24:03.409: INFO: Got endpoints: latency-svc-ngn7v [1.063123991s] Jan 28 21:24:03.464: INFO: Created: latency-svc-x8j74 Jan 28 21:24:03.469: INFO: Got endpoints: latency-svc-x8j74 [1.089022979s] Jan 28 21:24:03.509: INFO: Created: latency-svc-gbc8k Jan 28 21:24:03.511: INFO: Got endpoints: latency-svc-gbc8k [965.113557ms] Jan 28 21:24:03.553: INFO: Created: latency-svc-wrvdf Jan 28 21:24:03.555: INFO: Got endpoints: latency-svc-wrvdf [964.063759ms] Jan 28 21:24:03.599: INFO: Created: latency-svc-wkqtr Jan 28 21:24:03.601: INFO: Got endpoints: latency-svc-wkqtr [877.836662ms] Jan 28 21:24:03.633: INFO: Created: latency-svc-d7lcn Jan 28 21:24:03.639: INFO: Got endpoints: latency-svc-d7lcn [896.264347ms] Jan 28 21:24:03.653: INFO: Created: latency-svc-bpfvc Jan 28 21:24:03.660: INFO: Got endpoints: latency-svc-bpfvc [880.687788ms] Jan 28 21:24:03.742: INFO: Created: latency-svc-4nb47 Jan 28 
21:24:03.769: INFO: Got endpoints: latency-svc-4nb47 [986.946427ms] Jan 28 21:24:03.773: INFO: Created: latency-svc-d5m2h Jan 28 21:24:03.778: INFO: Got endpoints: latency-svc-d5m2h [965.854442ms] Jan 28 21:24:03.826: INFO: Created: latency-svc-rg8v8 Jan 28 21:24:03.829: INFO: Got endpoints: latency-svc-rg8v8 [935.160871ms] Jan 28 21:24:03.913: INFO: Created: latency-svc-n7zjf Jan 28 21:24:03.939: INFO: Got endpoints: latency-svc-n7zjf [1.006681083s] Jan 28 21:24:03.943: INFO: Created: latency-svc-mmln6 Jan 28 21:24:03.961: INFO: Got endpoints: latency-svc-mmln6 [867.795605ms] Jan 28 21:24:03.989: INFO: Created: latency-svc-2h24g Jan 28 21:24:04.004: INFO: Got endpoints: latency-svc-2h24g [877.058006ms] Jan 28 21:24:04.059: INFO: Created: latency-svc-99mmr Jan 28 21:24:04.082: INFO: Created: latency-svc-b4g68 Jan 28 21:24:04.082: INFO: Got endpoints: latency-svc-99mmr [908.014364ms] Jan 28 21:24:04.103: INFO: Got endpoints: latency-svc-b4g68 [736.386419ms] Jan 28 21:24:04.105: INFO: Created: latency-svc-j2vtn Jan 28 21:24:04.120: INFO: Got endpoints: latency-svc-j2vtn [711.364582ms] Jan 28 21:24:04.138: INFO: Created: latency-svc-rqhsd Jan 28 21:24:04.157: INFO: Got endpoints: latency-svc-rqhsd [74.573378ms] Jan 28 21:24:04.157: INFO: Created: latency-svc-rgkk2 Jan 28 21:24:04.227: INFO: Created: latency-svc-fln8s Jan 28 21:24:04.227: INFO: Got endpoints: latency-svc-rgkk2 [758.467664ms] Jan 28 21:24:04.235: INFO: Got endpoints: latency-svc-fln8s [723.685652ms] Jan 28 21:24:04.259: INFO: Created: latency-svc-hl4fs Jan 28 21:24:04.264: INFO: Got endpoints: latency-svc-hl4fs [709.280285ms] Jan 28 21:24:04.292: INFO: Created: latency-svc-wjnb9 Jan 28 21:24:04.296: INFO: Got endpoints: latency-svc-wjnb9 [695.545621ms] Jan 28 21:24:04.321: INFO: Created: latency-svc-b8rjs Jan 28 21:24:04.377: INFO: Got endpoints: latency-svc-b8rjs [737.119693ms] Jan 28 21:24:04.392: INFO: Created: latency-svc-xwlmc Jan 28 21:24:04.398: INFO: Got endpoints: latency-svc-xwlmc [737.266391ms] Jan 28 21:24:04.462: INFO: Created: latency-svc-ldkp4 Jan 28 21:24:04.472: INFO: Got endpoints: latency-svc-ldkp4 [702.316855ms] Jan 28 21:24:04.636: INFO: Created: latency-svc-8cxnd Jan 28 21:24:04.637: INFO: Created: latency-svc-lcrzg Jan 28 21:24:04.660: INFO: Got endpoints: latency-svc-lcrzg [881.798209ms] Jan 28 21:24:04.662: INFO: Got endpoints: latency-svc-8cxnd [833.131078ms] Jan 28 21:24:04.682: INFO: Created: latency-svc-vrmwn Jan 28 21:24:04.746: INFO: Got endpoints: latency-svc-vrmwn [806.653483ms] Jan 28 21:24:04.749: INFO: Created: latency-svc-r5pbs Jan 28 21:24:04.771: INFO: Got endpoints: latency-svc-r5pbs [809.728832ms] Jan 28 21:24:04.777: INFO: Created: latency-svc-hp97n Jan 28 21:24:04.793: INFO: Got endpoints: latency-svc-hp97n [788.625004ms] Jan 28 21:24:04.827: INFO: Created: latency-svc-cjwfg Jan 28 21:24:04.833: INFO: Got endpoints: latency-svc-cjwfg [730.165581ms] Jan 28 21:24:04.879: INFO: Created: latency-svc-jxxr8 Jan 28 21:24:04.902: INFO: Got endpoints: latency-svc-jxxr8 [781.30125ms] Jan 28 21:24:04.906: INFO: Created: latency-svc-5smp5 Jan 28 21:24:04.940: INFO: Got endpoints: latency-svc-5smp5 [782.877292ms] Jan 28 21:24:04.953: INFO: Created: latency-svc-w97kv Jan 28 21:24:04.956: INFO: Got endpoints: latency-svc-w97kv [729.194116ms] Jan 28 21:24:05.024: INFO: Created: latency-svc-85htp Jan 28 21:24:05.075: INFO: Created: latency-svc-pd5lm Jan 28 21:24:05.075: INFO: Got endpoints: latency-svc-85htp [840.646338ms] Jan 28 21:24:05.183: INFO: Got endpoints: latency-svc-pd5lm [918.182145ms] Jan 
28 21:24:05.235: INFO: Created: latency-svc-kh5qc Jan 28 21:24:05.283: INFO: Got endpoints: latency-svc-kh5qc [986.49551ms] Jan 28 21:24:05.284: INFO: Created: latency-svc-m8dq6 Jan 28 21:24:05.374: INFO: Got endpoints: latency-svc-m8dq6 [996.445239ms] Jan 28 21:24:05.380: INFO: Created: latency-svc-7fzt5 Jan 28 21:24:05.393: INFO: Got endpoints: latency-svc-7fzt5 [995.283656ms] Jan 28 21:24:05.422: INFO: Created: latency-svc-mr8fs Jan 28 21:24:05.427: INFO: Got endpoints: latency-svc-mr8fs [954.481543ms] Jan 28 21:24:05.454: INFO: Created: latency-svc-g7bgd Jan 28 21:24:05.457: INFO: Got endpoints: latency-svc-g7bgd [796.847526ms] Jan 28 21:24:05.520: INFO: Created: latency-svc-ntmr6 Jan 28 21:24:05.545: INFO: Created: latency-svc-8zn7j Jan 28 21:24:05.545: INFO: Got endpoints: latency-svc-ntmr6 [882.939269ms] Jan 28 21:24:05.559: INFO: Got endpoints: latency-svc-8zn7j [812.819962ms] Jan 28 21:24:05.584: INFO: Created: latency-svc-4ct5r Jan 28 21:24:05.590: INFO: Got endpoints: latency-svc-4ct5r [818.382276ms] Jan 28 21:24:05.610: INFO: Created: latency-svc-cqtkz Jan 28 21:24:05.654: INFO: Got endpoints: latency-svc-cqtkz [860.686666ms] Jan 28 21:24:05.676: INFO: Created: latency-svc-wktwv Jan 28 21:24:05.686: INFO: Got endpoints: latency-svc-wktwv [852.552059ms] Jan 28 21:24:05.712: INFO: Created: latency-svc-jsr4p Jan 28 21:24:05.717: INFO: Got endpoints: latency-svc-jsr4p [814.813669ms] Jan 28 21:24:05.740: INFO: Created: latency-svc-wt7pt Jan 28 21:24:05.780: INFO: Got endpoints: latency-svc-wt7pt [839.694748ms] Jan 28 21:24:05.789: INFO: Created: latency-svc-qnn9n Jan 28 21:24:05.793: INFO: Got endpoints: latency-svc-qnn9n [836.993456ms] Jan 28 21:24:05.817: INFO: Created: latency-svc-5dvhv Jan 28 21:24:05.818: INFO: Got endpoints: latency-svc-5dvhv [742.067102ms] Jan 28 21:24:05.847: INFO: Created: latency-svc-czcqn Jan 28 21:24:05.863: INFO: Got endpoints: latency-svc-czcqn [680.706293ms] Jan 28 21:24:05.907: INFO: Created: latency-svc-4fhs4 Jan 28 21:24:05.920: INFO: Got endpoints: latency-svc-4fhs4 [636.411928ms] Jan 28 21:24:05.939: INFO: Created: latency-svc-5hxfn Jan 28 21:24:05.944: INFO: Got endpoints: latency-svc-5hxfn [569.878652ms] Jan 28 21:24:05.976: INFO: Created: latency-svc-n5h72 Jan 28 21:24:05.984: INFO: Got endpoints: latency-svc-n5h72 [590.510155ms] Jan 28 21:24:06.086: INFO: Created: latency-svc-8q7cs Jan 28 21:24:06.137: INFO: Created: latency-svc-8lljk Jan 28 21:24:06.137: INFO: Got endpoints: latency-svc-8q7cs [709.931027ms] Jan 28 21:24:06.140: INFO: Got endpoints: latency-svc-8lljk [683.130003ms] Jan 28 21:24:06.173: INFO: Created: latency-svc-kc689 Jan 28 21:24:06.184: INFO: Got endpoints: latency-svc-kc689 [638.198326ms] Jan 28 21:24:06.254: INFO: Created: latency-svc-cvdcd Jan 28 21:24:06.266: INFO: Got endpoints: latency-svc-cvdcd [706.003652ms] Jan 28 21:24:06.287: INFO: Created: latency-svc-285p7 Jan 28 21:24:06.291: INFO: Got endpoints: latency-svc-285p7 [700.722572ms] Jan 28 21:24:06.303: INFO: Created: latency-svc-ckpl6 Jan 28 21:24:06.318: INFO: Got endpoints: latency-svc-ckpl6 [663.856676ms] Jan 28 21:24:06.335: INFO: Created: latency-svc-fjgvw Jan 28 21:24:06.338: INFO: Got endpoints: latency-svc-fjgvw [652.201878ms] Jan 28 21:24:06.398: INFO: Created: latency-svc-5t86r Jan 28 21:24:06.404: INFO: Got endpoints: latency-svc-5t86r [687.355155ms] Jan 28 21:24:06.424: INFO: Created: latency-svc-fpb6l Jan 28 21:24:06.424: INFO: Got endpoints: latency-svc-fpb6l [643.839965ms] Jan 28 21:24:06.473: INFO: Created: latency-svc-zw8bd Jan 28 21:24:06.478: 
INFO: Got endpoints: latency-svc-zw8bd [684.453878ms] Jan 28 21:24:06.545: INFO: Created: latency-svc-lxcwc Jan 28 21:24:06.565: INFO: Got endpoints: latency-svc-lxcwc [747.130358ms] Jan 28 21:24:06.566: INFO: Created: latency-svc-zkfhn Jan 28 21:24:06.573: INFO: Got endpoints: latency-svc-zkfhn [708.80641ms] Jan 28 21:24:06.602: INFO: Created: latency-svc-5tzrg Jan 28 21:24:06.604: INFO: Got endpoints: latency-svc-5tzrg [683.869587ms] Jan 28 21:24:06.675: INFO: Created: latency-svc-h8rj6 Jan 28 21:24:06.697: INFO: Created: latency-svc-dt2f4 Jan 28 21:24:06.698: INFO: Got endpoints: latency-svc-h8rj6 [753.569586ms] Jan 28 21:24:06.703: INFO: Got endpoints: latency-svc-dt2f4 [719.168541ms] Jan 28 21:24:06.726: INFO: Created: latency-svc-ddmk5 Jan 28 21:24:06.733: INFO: Got endpoints: latency-svc-ddmk5 [595.920783ms] Jan 28 21:24:06.750: INFO: Created: latency-svc-4rdn5 Jan 28 21:24:06.757: INFO: Got endpoints: latency-svc-4rdn5 [616.659892ms] Jan 28 21:24:06.813: INFO: Created: latency-svc-jxlb4 Jan 28 21:24:06.826: INFO: Got endpoints: latency-svc-jxlb4 [642.479135ms] Jan 28 21:24:06.828: INFO: Created: latency-svc-dlsfq Jan 28 21:24:06.858: INFO: Got endpoints: latency-svc-dlsfq [591.97536ms] Jan 28 21:24:06.949: INFO: Created: latency-svc-kbfj4 Jan 28 21:24:06.952: INFO: Got endpoints: latency-svc-kbfj4 [660.815535ms] Jan 28 21:24:06.994: INFO: Created: latency-svc-5snfg Jan 28 21:24:06.995: INFO: Got endpoints: latency-svc-5snfg [676.664924ms] Jan 28 21:24:07.014: INFO: Created: latency-svc-bj6bn Jan 28 21:24:07.032: INFO: Got endpoints: latency-svc-bj6bn [693.68883ms] Jan 28 21:24:07.123: INFO: Created: latency-svc-c7xkk Jan 28 21:24:07.175: INFO: Created: latency-svc-wsn49 Jan 28 21:24:07.177: INFO: Got endpoints: latency-svc-c7xkk [772.564809ms] Jan 28 21:24:07.199: INFO: Got endpoints: latency-svc-wsn49 [774.276931ms] Jan 28 21:24:07.337: INFO: Created: latency-svc-tlq54 Jan 28 21:24:07.359: INFO: Got endpoints: latency-svc-tlq54 [880.997956ms] Jan 28 21:24:07.380: INFO: Created: latency-svc-zv5kx Jan 28 21:24:07.415: INFO: Got endpoints: latency-svc-zv5kx [850.188356ms] Jan 28 21:24:07.543: INFO: Created: latency-svc-qcb4b Jan 28 21:24:07.557: INFO: Got endpoints: latency-svc-qcb4b [984.209551ms] Jan 28 21:24:07.588: INFO: Created: latency-svc-lbp5t Jan 28 21:24:07.601: INFO: Got endpoints: latency-svc-lbp5t [997.098093ms] Jan 28 21:24:07.622: INFO: Created: latency-svc-9pnn6 Jan 28 21:24:07.625: INFO: Got endpoints: latency-svc-9pnn6 [927.221788ms] Jan 28 21:24:07.680: INFO: Created: latency-svc-x52bj Jan 28 21:24:07.685: INFO: Got endpoints: latency-svc-x52bj [982.475998ms] Jan 28 21:24:07.705: INFO: Created: latency-svc-5zwdj Jan 28 21:24:07.724: INFO: Got endpoints: latency-svc-5zwdj [991.272192ms] Jan 28 21:24:07.744: INFO: Created: latency-svc-z69vj Jan 28 21:24:07.764: INFO: Got endpoints: latency-svc-z69vj [1.006505533s] Jan 28 21:24:07.765: INFO: Created: latency-svc-kqqwf Jan 28 21:24:07.833: INFO: Got endpoints: latency-svc-kqqwf [1.006715516s] Jan 28 21:24:07.835: INFO: Created: latency-svc-qjbvt Jan 28 21:24:07.855: INFO: Got endpoints: latency-svc-qjbvt [997.281702ms] Jan 28 21:24:07.878: INFO: Created: latency-svc-c6cdg Jan 28 21:24:07.888: INFO: Got endpoints: latency-svc-c6cdg [936.406786ms] Jan 28 21:24:07.918: INFO: Created: latency-svc-xn7m7 Jan 28 21:24:08.007: INFO: Got endpoints: latency-svc-xn7m7 [1.011518563s] Jan 28 21:24:08.012: INFO: Created: latency-svc-gmbll Jan 28 21:24:08.027: INFO: Got endpoints: latency-svc-gmbll [995.058928ms] Jan 28 21:24:08.043: 
INFO: Created: latency-svc-b2xdq Jan 28 21:24:08.048: INFO: Got endpoints: latency-svc-b2xdq [870.42111ms] Jan 28 21:24:08.069: INFO: Created: latency-svc-2gpm7 Jan 28 21:24:08.070: INFO: Got endpoints: latency-svc-2gpm7 [871.78055ms] Jan 28 21:24:08.152: INFO: Created: latency-svc-vvrgl Jan 28 21:24:08.155: INFO: Got endpoints: latency-svc-vvrgl [794.855767ms] Jan 28 21:24:08.189: INFO: Created: latency-svc-f9vhg Jan 28 21:24:08.190: INFO: Got endpoints: latency-svc-f9vhg [774.355451ms] Jan 28 21:24:08.213: INFO: Created: latency-svc-jckjs Jan 28 21:24:08.218: INFO: Got endpoints: latency-svc-jckjs [660.339607ms] Jan 28 21:24:08.251: INFO: Created: latency-svc-nkfj9 Jan 28 21:24:08.337: INFO: Got endpoints: latency-svc-nkfj9 [735.383349ms] Jan 28 21:24:08.402: INFO: Created: latency-svc-r9f6m Jan 28 21:24:08.418: INFO: Got endpoints: latency-svc-r9f6m [793.577877ms] Jan 28 21:24:08.496: INFO: Created: latency-svc-vcz8j Jan 28 21:24:08.523: INFO: Got endpoints: latency-svc-vcz8j [837.62126ms] Jan 28 21:24:08.524: INFO: Created: latency-svc-d4677 Jan 28 21:24:08.548: INFO: Got endpoints: latency-svc-d4677 [823.78724ms] Jan 28 21:24:08.575: INFO: Created: latency-svc-897n2 Jan 28 21:24:08.623: INFO: Got endpoints: latency-svc-897n2 [858.80429ms] Jan 28 21:24:08.628: INFO: Created: latency-svc-jdlg5 Jan 28 21:24:08.639: INFO: Got endpoints: latency-svc-jdlg5 [805.451873ms] Jan 28 21:24:08.679: INFO: Created: latency-svc-2lvvs Jan 28 21:24:08.705: INFO: Got endpoints: latency-svc-2lvvs [849.372061ms] Jan 28 21:24:08.710: INFO: Created: latency-svc-42tsz Jan 28 21:24:08.757: INFO: Got endpoints: latency-svc-42tsz [869.224404ms] Jan 28 21:24:08.760: INFO: Created: latency-svc-hczwx Jan 28 21:24:08.766: INFO: Got endpoints: latency-svc-hczwx [759.362058ms] Jan 28 21:24:08.805: INFO: Created: latency-svc-qr8x4 Jan 28 21:24:08.808: INFO: Got endpoints: latency-svc-qr8x4 [780.91745ms] Jan 28 21:24:08.831: INFO: Created: latency-svc-lx9q4 Jan 28 21:24:08.833: INFO: Got endpoints: latency-svc-lx9q4 [784.830478ms] Jan 28 21:24:08.853: INFO: Created: latency-svc-4bzg2 Jan 28 21:24:08.881: INFO: Got endpoints: latency-svc-4bzg2 [811.024774ms] Jan 28 21:24:08.905: INFO: Created: latency-svc-q8t7f Jan 28 21:24:08.918: INFO: Got endpoints: latency-svc-q8t7f [762.798612ms] Jan 28 21:24:08.942: INFO: Created: latency-svc-pmspc Jan 28 21:24:08.966: INFO: Got endpoints: latency-svc-pmspc [775.973495ms] Jan 28 21:24:09.034: INFO: Created: latency-svc-924rr Jan 28 21:24:09.043: INFO: Got endpoints: latency-svc-924rr [824.965815ms] Jan 28 21:24:09.060: INFO: Created: latency-svc-zx7d5 Jan 28 21:24:09.080: INFO: Got endpoints: latency-svc-zx7d5 [743.306779ms] Jan 28 21:24:09.086: INFO: Created: latency-svc-b4vd9 Jan 28 21:24:09.110: INFO: Got endpoints: latency-svc-b4vd9 [691.278885ms] Jan 28 21:24:09.133: INFO: Created: latency-svc-2gnnq Jan 28 21:24:09.200: INFO: Got endpoints: latency-svc-2gnnq [676.907298ms] Jan 28 21:24:09.250: INFO: Created: latency-svc-5mv4f Jan 28 21:24:09.268: INFO: Got endpoints: latency-svc-5mv4f [719.080902ms] Jan 28 21:24:09.289: INFO: Created: latency-svc-8l72b Jan 28 21:24:09.357: INFO: Got endpoints: latency-svc-8l72b [733.964828ms] Jan 28 21:24:09.362: INFO: Created: latency-svc-7q7s7 Jan 28 21:24:09.366: INFO: Got endpoints: latency-svc-7q7s7 [727.537537ms] Jan 28 21:24:09.384: INFO: Created: latency-svc-xjffg Jan 28 21:24:09.394: INFO: Got endpoints: latency-svc-xjffg [689.287057ms] Jan 28 21:24:09.511: INFO: Created: latency-svc-ck2ft Jan 28 21:24:09.547: INFO: Created: 
latency-svc-gghd5 Jan 28 21:24:09.551: INFO: Got endpoints: latency-svc-ck2ft [793.015427ms] Jan 28 21:24:09.563: INFO: Got endpoints: latency-svc-gghd5 [796.529413ms] Jan 28 21:24:09.587: INFO: Created: latency-svc-kdn42 Jan 28 21:24:09.595: INFO: Got endpoints: latency-svc-kdn42 [786.587605ms] Jan 28 21:24:09.650: INFO: Created: latency-svc-b5cfv Jan 28 21:24:09.651: INFO: Got endpoints: latency-svc-b5cfv [818.287971ms] Jan 28 21:24:09.684: INFO: Created: latency-svc-v2hgr Jan 28 21:24:09.703: INFO: Got endpoints: latency-svc-v2hgr [821.220091ms] Jan 28 21:24:09.710: INFO: Created: latency-svc-8tzr6 Jan 28 21:24:09.716: INFO: Got endpoints: latency-svc-8tzr6 [798.228821ms] Jan 28 21:24:09.734: INFO: Created: latency-svc-ldtn7 Jan 28 21:24:09.744: INFO: Got endpoints: latency-svc-ldtn7 [778.237327ms] Jan 28 21:24:09.817: INFO: Created: latency-svc-bv5c9 Jan 28 21:24:09.829: INFO: Got endpoints: latency-svc-bv5c9 [785.671667ms] Jan 28 21:24:09.867: INFO: Created: latency-svc-chsm5 Jan 28 21:24:09.902: INFO: Created: latency-svc-v7g68 Jan 28 21:24:09.903: INFO: Got endpoints: latency-svc-chsm5 [822.550784ms] Jan 28 21:24:09.910: INFO: Got endpoints: latency-svc-v7g68 [799.441704ms] Jan 28 21:24:09.978: INFO: Created: latency-svc-bt59c Jan 28 21:24:09.985: INFO: Got endpoints: latency-svc-bt59c [784.524161ms] Jan 28 21:24:10.009: INFO: Created: latency-svc-pvcv8 Jan 28 21:24:10.019: INFO: Got endpoints: latency-svc-pvcv8 [750.616564ms] Jan 28 21:24:10.046: INFO: Created: latency-svc-wktrm Jan 28 21:24:10.099: INFO: Created: latency-svc-hbkfw Jan 28 21:24:10.100: INFO: Got endpoints: latency-svc-wktrm [742.585559ms] Jan 28 21:24:10.104: INFO: Got endpoints: latency-svc-hbkfw [737.407039ms] Jan 28 21:24:10.135: INFO: Created: latency-svc-8brkm Jan 28 21:24:10.144: INFO: Got endpoints: latency-svc-8brkm [749.865579ms] Jan 28 21:24:10.166: INFO: Created: latency-svc-g897p Jan 28 21:24:10.171: INFO: Got endpoints: latency-svc-g897p [620.291925ms] Jan 28 21:24:10.254: INFO: Created: latency-svc-sqzk9 Jan 28 21:24:10.279: INFO: Got endpoints: latency-svc-sqzk9 [715.567724ms] Jan 28 21:24:10.283: INFO: Created: latency-svc-m6brj Jan 28 21:24:10.297: INFO: Got endpoints: latency-svc-m6brj [701.861661ms] Jan 28 21:24:10.395: INFO: Created: latency-svc-8hxp2 Jan 28 21:24:10.398: INFO: Got endpoints: latency-svc-8hxp2 [746.529115ms] Jan 28 21:24:10.435: INFO: Created: latency-svc-dfgnq Jan 28 21:24:10.441: INFO: Got endpoints: latency-svc-dfgnq [737.607376ms] Jan 28 21:24:10.466: INFO: Created: latency-svc-2hclg Jan 28 21:24:10.524: INFO: Got endpoints: latency-svc-2hclg [808.073644ms] Jan 28 21:24:10.529: INFO: Created: latency-svc-x7gcb Jan 28 21:24:10.548: INFO: Got endpoints: latency-svc-x7gcb [803.522668ms] Jan 28 21:24:10.549: INFO: Created: latency-svc-lzxd9 Jan 28 21:24:10.561: INFO: Got endpoints: latency-svc-lzxd9 [731.806998ms] Jan 28 21:24:10.573: INFO: Created: latency-svc-7zb79 Jan 28 21:24:10.581: INFO: Got endpoints: latency-svc-7zb79 [677.355602ms] Jan 28 21:24:10.614: INFO: Created: latency-svc-c7lfz Jan 28 21:24:10.684: INFO: Got endpoints: latency-svc-c7lfz [774.636049ms] Jan 28 21:24:10.709: INFO: Created: latency-svc-sjbst Jan 28 21:24:10.714: INFO: Got endpoints: latency-svc-sjbst [728.784372ms] Jan 28 21:24:10.731: INFO: Created: latency-svc-ldhtd Jan 28 21:24:10.736: INFO: Got endpoints: latency-svc-ldhtd [717.269435ms] Jan 28 21:24:10.759: INFO: Created: latency-svc-flsp4 Jan 28 21:24:10.766: INFO: Got endpoints: latency-svc-flsp4 [666.567742ms] Jan 28 21:24:10.834: INFO: 
Created: latency-svc-wvd94 Jan 28 21:24:10.856: INFO: Created: latency-svc-gj6ml Jan 28 21:24:10.857: INFO: Got endpoints: latency-svc-wvd94 [752.411459ms] Jan 28 21:24:10.869: INFO: Got endpoints: latency-svc-gj6ml [724.655149ms] Jan 28 21:24:10.903: INFO: Created: latency-svc-h4k6k Jan 28 21:24:10.920: INFO: Got endpoints: latency-svc-h4k6k [748.794918ms] Jan 28 21:24:10.924: INFO: Created: latency-svc-jct6j Jan 28 21:24:10.961: INFO: Got endpoints: latency-svc-jct6j [682.07873ms] Jan 28 21:24:10.983: INFO: Created: latency-svc-vnwll Jan 28 21:24:10.987: INFO: Got endpoints: latency-svc-vnwll [689.57265ms] Jan 28 21:24:11.011: INFO: Created: latency-svc-5bg6l Jan 28 21:24:11.014: INFO: Got endpoints: latency-svc-5bg6l [616.533858ms] Jan 28 21:24:11.028: INFO: Created: latency-svc-sm9fl Jan 28 21:24:11.047: INFO: Got endpoints: latency-svc-sm9fl [606.190121ms] Jan 28 21:24:11.097: INFO: Created: latency-svc-p9rcm Jan 28 21:24:11.100: INFO: Got endpoints: latency-svc-p9rcm [575.321328ms] Jan 28 21:24:11.138: INFO: Created: latency-svc-26cxd Jan 28 21:24:11.142: INFO: Got endpoints: latency-svc-26cxd [594.041901ms] Jan 28 21:24:11.167: INFO: Created: latency-svc-mbjf7 Jan 28 21:24:11.272: INFO: Got endpoints: latency-svc-mbjf7 [710.510684ms] Jan 28 21:24:11.312: INFO: Created: latency-svc-lcxfh Jan 28 21:24:11.323: INFO: Got endpoints: latency-svc-lcxfh [741.894889ms] Jan 28 21:24:11.348: INFO: Created: latency-svc-tvvt8 Jan 28 21:24:11.365: INFO: Got endpoints: latency-svc-tvvt8 [680.654481ms] Jan 28 21:24:11.440: INFO: Created: latency-svc-7qt5k Jan 28 21:24:11.445: INFO: Got endpoints: latency-svc-7qt5k [731.231669ms] Jan 28 21:24:11.495: INFO: Created: latency-svc-t82gz Jan 28 21:24:11.505: INFO: Got endpoints: latency-svc-t82gz [768.779854ms] Jan 28 21:24:11.591: INFO: Created: latency-svc-7gqwm Jan 28 21:24:11.610: INFO: Got endpoints: latency-svc-7gqwm [843.528136ms] Jan 28 21:24:11.613: INFO: Created: latency-svc-nsklr Jan 28 21:24:11.630: INFO: Got endpoints: latency-svc-nsklr [773.335523ms] Jan 28 21:24:11.651: INFO: Created: latency-svc-fwdzh Jan 28 21:24:11.655: INFO: Got endpoints: latency-svc-fwdzh [785.570825ms] Jan 28 21:24:11.740: INFO: Created: latency-svc-gj2xj Jan 28 21:24:11.760: INFO: Got endpoints: latency-svc-gj2xj [839.373284ms] Jan 28 21:24:11.763: INFO: Created: latency-svc-ghnqj Jan 28 21:24:11.765: INFO: Got endpoints: latency-svc-ghnqj [803.671804ms] Jan 28 21:24:11.791: INFO: Created: latency-svc-4lxqh Jan 28 21:24:11.798: INFO: Got endpoints: latency-svc-4lxqh [811.074297ms] Jan 28 21:24:11.816: INFO: Created: latency-svc-2dxgg Jan 28 21:24:11.821: INFO: Got endpoints: latency-svc-2dxgg [806.338313ms] Jan 28 21:24:11.846: INFO: Created: latency-svc-hcqdk Jan 28 21:24:11.894: INFO: Got endpoints: latency-svc-hcqdk [847.065512ms] Jan 28 21:24:11.900: INFO: Created: latency-svc-fnk9b Jan 28 21:24:11.925: INFO: Created: latency-svc-lx95n Jan 28 21:24:11.926: INFO: Got endpoints: latency-svc-fnk9b [825.764279ms] Jan 28 21:24:11.938: INFO: Got endpoints: latency-svc-lx95n [795.54843ms] Jan 28 21:24:11.973: INFO: Created: latency-svc-c2tpz Jan 28 21:24:11.994: INFO: Got endpoints: latency-svc-c2tpz [722.262291ms] Jan 28 21:24:12.069: INFO: Created: latency-svc-xs4t7 Jan 28 21:24:12.071: INFO: Got endpoints: latency-svc-xs4t7 [747.693331ms] Jan 28 21:24:12.109: INFO: Created: latency-svc-q4zbh Jan 28 21:24:12.112: INFO: Got endpoints: latency-svc-q4zbh [746.513676ms] Jan 28 21:24:12.136: INFO: Created: latency-svc-ldb2r Jan 28 21:24:12.142: INFO: Got endpoints: 
latency-svc-ldb2r [696.780732ms] Jan 28 21:24:12.248: INFO: Created: latency-svc-wjgcg Jan 28 21:24:12.280: INFO: Got endpoints: latency-svc-wjgcg [775.172721ms] Jan 28 21:24:12.308: INFO: Created: latency-svc-89zn6 Jan 28 21:24:12.320: INFO: Got endpoints: latency-svc-89zn6 [710.000104ms] Jan 28 21:24:12.321: INFO: Latencies: [74.573378ms 151.190873ms 209.167102ms 346.445728ms 489.432673ms 505.332057ms 569.878652ms 575.321328ms 590.510155ms 591.97536ms 594.041901ms 595.920783ms 606.190121ms 616.533858ms 616.659892ms 620.291925ms 636.411928ms 638.198326ms 642.479135ms 643.839965ms 652.201878ms 660.339607ms 660.815535ms 663.856676ms 666.567742ms 676.664924ms 676.907298ms 677.355602ms 680.654481ms 680.706293ms 682.07873ms 683.130003ms 683.869587ms 684.453878ms 687.355155ms 689.287057ms 689.57265ms 691.278885ms 693.68883ms 695.545621ms 696.780732ms 700.722572ms 701.861661ms 702.316855ms 706.003652ms 708.80641ms 709.280285ms 709.931027ms 710.000104ms 710.510684ms 711.364582ms 714.389112ms 715.567724ms 717.269435ms 719.080902ms 719.168541ms 722.262291ms 723.685652ms 724.655149ms 727.537537ms 728.784372ms 729.194116ms 730.165581ms 731.231669ms 731.806998ms 733.964828ms 735.383349ms 736.386419ms 737.119693ms 737.266391ms 737.407039ms 737.607376ms 741.894889ms 742.067102ms 742.585559ms 743.306779ms 746.513676ms 746.529115ms 747.130358ms 747.693331ms 748.794918ms 749.865579ms 750.616564ms 752.411459ms 753.569586ms 758.467664ms 759.362058ms 762.798612ms 768.779854ms 772.564809ms 773.335523ms 774.276931ms 774.355451ms 774.636049ms 775.172721ms 775.973495ms 778.237327ms 780.91745ms 781.30125ms 782.877292ms 784.524161ms 784.830478ms 785.570825ms 785.671667ms 786.587605ms 788.625004ms 793.015427ms 793.577877ms 794.855767ms 795.54843ms 796.529413ms 796.847526ms 798.228821ms 799.441704ms 803.522668ms 803.671804ms 805.451873ms 806.338313ms 806.653483ms 808.073644ms 809.728832ms 811.024774ms 811.074297ms 812.819962ms 814.813669ms 818.287971ms 818.382276ms 821.220091ms 822.550784ms 823.78724ms 824.965815ms 825.764279ms 833.131078ms 836.993456ms 837.62126ms 839.373284ms 839.694748ms 840.646338ms 843.528136ms 847.065512ms 849.372061ms 850.188356ms 852.552059ms 858.80429ms 860.686666ms 867.795605ms 869.224404ms 870.42111ms 871.78055ms 877.058006ms 877.836662ms 880.687788ms 880.997956ms 881.798209ms 882.939269ms 896.264347ms 908.014364ms 917.907391ms 918.182145ms 927.221788ms 935.160871ms 936.406786ms 954.481543ms 964.063759ms 965.113557ms 965.854442ms 982.475998ms 984.209551ms 986.49551ms 986.946427ms 991.272192ms 995.058928ms 995.283656ms 996.445239ms 997.098093ms 997.281702ms 1.001194955s 1.006505533s 1.006681083s 1.006715516s 1.011518563s 1.041182021s 1.063123991s 1.089022979s 1.102793667s 1.180134502s 1.189539838s 1.192264728s 1.203185524s 1.234775606s 1.34049568s 1.363875316s 1.396286075s 1.405933482s 1.453028718s 1.562179296s 1.587185555s 1.607366305s 1.610005926s 1.740189213s] Jan 28 21:24:12.321: INFO: 50 %ile: 784.524161ms Jan 28 21:24:12.321: INFO: 90 %ile: 1.011518563s Jan 28 21:24:12.321: INFO: 99 %ile: 1.610005926s Jan 28 21:24:12.321: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:24:12.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7464" for this suite. 
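Each of the 200 samples above is the elapsed time from creating a service in front of svc-latency-rc to its endpoints becoming available, from which the 50/90/99 %ile figures are computed. A hand-rolled single sample, sketched for a hypothetical namespace svc-latency-demo (GNU date assumed for nanosecond timestamps):

# Time from service creation to the first ready endpoint address.
start=$(date +%s%N)
kubectl expose rc svc-latency-rc --name=latency-probe --port=80 -n svc-latency-demo
until kubectl get endpoints latency-probe -n svc-latency-demo \
    -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q .; do
  sleep 0.1
done
echo "endpoints ready after $(( ( $(date +%s%N) - start ) / 1000000 )) ms"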
• [SLOW TEST:18.743 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":38,"skipped":641,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:24:12.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 28 21:24:12.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f" in namespace "downward-api-1109" to be "success or failure" Jan 28 21:24:12.524: INFO: Pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.509786ms Jan 28 21:24:14.538: INFO: Pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033149694s Jan 28 21:24:16.553: INFO: Pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047746293s Jan 28 21:24:18.573: INFO: Pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068461112s Jan 28 21:24:20.591: INFO: Pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086083259s STEP: Saw pod success Jan 28 21:24:20.591: INFO: Pod "downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f" satisfied condition "success or failure" Jan 28 21:24:20.654: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f container client-container: STEP: delete the pod Jan 28 21:24:20.789: INFO: Waiting for pod downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f to disappear Jan 28 21:24:20.987: INFO: Pod downwardapi-volume-000b279b-3e6b-427c-8424-70532f05f94f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:24:20.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1109" for this suite. 
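The downward API spec above mounts a volume that renders the container's own memory request into a file, then asserts on the file's contents. A minimal standalone pod showing the same plumbing (names and image are illustrative, not the test's):

kubectl apply -n downward-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
# With no divisor set, the value is rendered in bytes (33554432 for 32Mi).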
• [SLOW TEST:8.646 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:24:21.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:24:35.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6347" for this suite. 
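The kubelet spec above schedules a busybox container whose root filesystem is mounted read-only and verifies that writes to it are refused. The behavior comes from a single securityContext field; a sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file || echo 'write refused'; sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# kubectl logs busybox-readonly-fs-demo should show the write being refused
# with a read-only file system error.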
• [SLOW TEST:14.459 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":671,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:24:35.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 28 21:24:35.662: INFO: namespace kubectl-7315 Jan 28 21:24:35.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7315' Jan 28 21:24:36.066: INFO: stderr: "" Jan 28 21:24:36.067: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 28 21:24:37.075: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:37.076: INFO: Found 0 / 1 Jan 28 21:24:38.101: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:38.101: INFO: Found 0 / 1 Jan 28 21:24:39.077: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:39.077: INFO: Found 0 / 1 Jan 28 21:24:40.082: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:40.082: INFO: Found 0 / 1 Jan 28 21:24:41.083: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:41.083: INFO: Found 0 / 1 Jan 28 21:24:42.139: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:42.139: INFO: Found 0 / 1 Jan 28 21:24:43.114: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:43.114: INFO: Found 0 / 1 Jan 28 21:24:44.074: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:44.074: INFO: Found 0 / 1 Jan 28 21:24:45.096: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:45.096: INFO: Found 0 / 1 Jan 28 21:24:46.190: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:46.191: INFO: Found 1 / 1 Jan 28 21:24:46.191: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 28 21:24:46.200: INFO: Selector matched 1 pods for map[app:agnhost] Jan 28 21:24:46.200: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 28 21:24:46.200: INFO: wait on agnhost-master startup in kubectl-7315 Jan 28 21:24:46.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-n94n4 agnhost-master --namespace=kubectl-7315' Jan 28 21:24:46.486: INFO: stderr: "" Jan 28 21:24:46.487: INFO: stdout: "Paused\n" STEP: exposing RC Jan 28 21:24:46.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7315' Jan 28 21:24:46.740: INFO: stderr: "" Jan 28 21:24:46.740: INFO: stdout: "service/rm2 exposed\n" Jan 28 21:24:46.752: INFO: Service rm2 in namespace kubectl-7315 found. STEP: exposing service Jan 28 21:24:48.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7315' Jan 28 21:24:49.034: INFO: stderr: "" Jan 28 21:24:49.035: INFO: stdout: "service/rm3 exposed\n" Jan 28 21:24:49.039: INFO: Service rm3 in namespace kubectl-7315 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:24:51.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7315" for this suite. • [SLOW TEST:15.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":41,"skipped":673,"failed":0} [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:24:51.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4784 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4784 STEP: creating replication controller externalsvc in namespace services-4784 I0128 21:24:51.552349 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4784, replica count: 2 I0128 21:24:54.603323 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:24:57.604283 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:25:00.604941 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:25:03.606108 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 28 21:25:03.661: INFO: Creating new exec pod Jan 28 21:25:11.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4784 execpodx5q2k -- /bin/sh -x -c nslookup clusterip-service' Jan 28 21:25:12.231: INFO: stderr: "I0128 21:25:12.031470 775 log.go:172] (0xc0009aa000) (0xc0006bbae0) Create stream\nI0128 21:25:12.031644 775 log.go:172] (0xc0009aa000) (0xc0006bbae0) Stream added, broadcasting: 1\nI0128 21:25:12.037200 775 log.go:172] (0xc0009aa000) Reply frame received for 1\nI0128 21:25:12.037237 775 log.go:172] (0xc0009aa000) (0xc000980000) Create stream\nI0128 21:25:12.037244 775 log.go:172] (0xc0009aa000) (0xc000980000) Stream added, broadcasting: 3\nI0128 21:25:12.038308 775 log.go:172] (0xc0009aa000) Reply frame received for 3\nI0128 21:25:12.038328 775 log.go:172] (0xc0009aa000) (0xc0006bbb80) Create stream\nI0128 21:25:12.038339 775 log.go:172] (0xc0009aa000) (0xc0006bbb80) Stream added, broadcasting: 5\nI0128 21:25:12.040405 775 log.go:172] (0xc0009aa000) Reply frame received for 5\nI0128 21:25:12.132978 775 log.go:172] (0xc0009aa000) Data frame received for 5\nI0128 21:25:12.133085 775 log.go:172] (0xc0006bbb80) (5) Data frame handling\nI0128 21:25:12.133111 775 log.go:172] (0xc0006bbb80) (5) Data frame sent\n+ nslookup clusterip-service\nI0128 21:25:12.140558 775 log.go:172] (0xc0009aa000) Data frame received for 3\nI0128 21:25:12.140636 775 log.go:172] (0xc000980000) (3) Data frame handling\nI0128 21:25:12.140669 775 log.go:172] (0xc000980000) (3) Data frame sent\nI0128 21:25:12.141814 775 log.go:172] (0xc0009aa000) Data frame received for 3\nI0128 21:25:12.141848 775 log.go:172] (0xc000980000) (3) Data frame handling\nI0128 21:25:12.141868 775 log.go:172] (0xc000980000) (3) Data frame sent\nI0128 21:25:12.219492 775 log.go:172] (0xc0009aa000) Data frame received for 1\nI0128 21:25:12.219614 775 log.go:172] (0xc0006bbae0) (1) Data frame handling\nI0128 21:25:12.219652 775 log.go:172] (0xc0006bbae0) (1) Data frame sent\nI0128 21:25:12.220490 775 log.go:172] (0xc0009aa000) (0xc0006bbb80) Stream removed, broadcasting: 5\nI0128 21:25:12.220552 775 log.go:172] (0xc0009aa000) (0xc0006bbae0) Stream removed, broadcasting: 1\nI0128 21:25:12.221544 775 log.go:172] (0xc0009aa000) (0xc000980000) Stream removed, broadcasting: 3\nI0128 21:25:12.221965 775 log.go:172] (0xc0009aa000) (0xc0006bbae0) Stream removed, broadcasting: 1\nI0128 21:25:12.221987 775 log.go:172] (0xc0009aa000) (0xc000980000) Stream removed, broadcasting: 3\nI0128 21:25:12.222004 775 log.go:172] (0xc0009aa000) (0xc0006bbb80) Stream removed, broadcasting: 5\nI0128 21:25:12.222540 775 log.go:172] (0xc0009aa000) Go away received\n" Jan 28 21:25:12.231: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4784.svc.cluster.local\tcanonical name = externalsvc.services-4784.svc.cluster.local.\nName:\texternalsvc.services-4784.svc.cluster.local\nAddress: 10.96.123.238\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4784, will wait for the garbage collector to delete the 
pods Jan 28 21:25:12.297: INFO: Deleting ReplicationController externalsvc took: 7.583092ms Jan 28 21:25:12.397: INFO: Terminating ReplicationController externalsvc pods took: 100.494876ms Jan 28 21:25:23.306: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:25:23.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4784" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:32.295 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":42,"skipped":673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:25:23.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-caa2a194-2178-49a4-862a-2d9cbfec9889 STEP: Creating a pod to test consume secrets Jan 28 21:25:23.649: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c" in namespace "projected-1173" to be "success or failure" Jan 28 21:25:23.792: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c": Phase="Pending", Reason="", readiness=false. Elapsed: 142.240213ms Jan 28 21:25:25.799: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14980031s Jan 28 21:25:27.807: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157585616s Jan 28 21:25:29.818: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168439028s Jan 28 21:25:31.836: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186309097s Jan 28 21:25:33.870: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.22012781s STEP: Saw pod success Jan 28 21:25:33.870: INFO: Pod "pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c" satisfied condition "success or failure" Jan 28 21:25:33.877: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c container projected-secret-volume-test: STEP: delete the pod Jan 28 21:25:33.998: INFO: Waiting for pod pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c to disappear Jan 28 21:25:34.006: INFO: Pod pod-projected-secrets-6be758ee-943d-4aff-9b2e-4d7dada9496c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:25:34.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1173" for this suite. • [SLOW TEST:10.638 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":707,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:25:34.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-6d9f4bf2-1efb-4b7e-be8b-82c404d5e370 STEP: Creating secret with name s-test-opt-upd-5f3cfd63-7dfa-4bdd-847a-da231abc3079 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6d9f4bf2-1efb-4b7e-be8b-82c404d5e370 STEP: Updating secret s-test-opt-upd-5f3cfd63-7dfa-4bdd-847a-da231abc3079 STEP: Creating secret with name s-test-opt-create-9ee4c5d6-bff2-4f77-85a5-a35fe57ba802 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:27:19.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-42" for this suite. 
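The optional-updates spec above mounts a projected volume whose secret sources are all marked optional, then deletes one secret, updates another, and creates a third, waiting for the kubelet to resync the volume after each change. The key schema detail is optional: true on each projection source; a sketch with shortened, illustrative names:

kubectl apply -n projected-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: viewer
    image: busybox
    command: ["sh", "-c", "while true; do cat /projected/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true   # pod keeps running even after this secret is deleted
      - secret:
          name: s-test-opt-upd
          optional: true   # updates to this secret propagate into the volume
EOF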
• [SLOW TEST:105.735 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":711,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:27:19.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 28 21:27:36.041: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:36.053: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:38.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:38.066: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:40.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:40.061: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:42.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:42.063: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:44.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:44.060: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:46.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:46.066: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:48.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:48.062: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:50.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:50.065: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:52.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:52.064: INFO: Pod pod-with-prestop-exec-hook still exists Jan 28 21:27:54.054: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 28 21:27:54.063: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:27:54.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5725" for this 
suite. • [SLOW TEST:34.355 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":714,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:27:54.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-57508a4d-a2c6-459e-a23a-380204c0923c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:27:54.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7121" for this suite. 
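------------------------------
Illustrative sketch (not part of the captured output): the Secrets spec above asserts that the API server rejects a Secret whose data map uses an empty string as a key. A manifest of that shape, with a hypothetical name, would be:

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test-example   # hypothetical name
data:
  "": dmFsdWUtMQ==   # empty key; the API server should reject this at validation

Creating it (for example via kubectl create -f -) should fail with a validation error, which is exactly the failure the spec expects.
------------------------------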
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":46,"skipped":728,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:27:54.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:27:54.774: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:27:56.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:27:58.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:28:00.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:28:02.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:28:04.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:28:07.826: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 28 21:28:15.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8006 to-be-attached-pod -i -c=container1' Jan 28 21:28:16.071: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:28:16.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-8006" for this suite. STEP: Destroying namespace "webhook-8006-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.112 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":47,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:28:16.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:28:16.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:28:18.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:28:20.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:28:22.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:28:24.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843696, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:28:27.886: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:28:28.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6921" for this suite. STEP: Destroying namespace "webhook-6921-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.320 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":48,"skipped":756,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:28:28.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 28 21:28:48.805: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:28:48.815: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:28:50.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:28:50.831: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:28:52.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:28:52.824: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:28:54.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:28:54.822: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:28:56.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:28:56.825: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:28:58.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:28:58.826: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:29:00.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:29:00.831: INFO: Pod pod-with-poststart-exec-hook still exists Jan 28 21:29:02.815: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 28 21:29:02.829: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:29:02.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6179" for this suite. 
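------------------------------
Illustrative sketch (not part of the captured output): the two lifecycle-hook specs in this run (prestop earlier, poststart here) create a handler pod and then a pod whose container declares an exec hook that calls back to the handler. The hooked pod could look roughly like this; the image, sleep command, and handler address are assumptions, while the pod name matches the log.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name as seen in the log
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox   # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:      # the prestop variant declares preStop instead
        exec:
          command: ["sh", "-c", "wget -qO- http://HANDLER_POD_IP:8080/echo?msg=poststart"]   # handler address assumed

For the poststart variant the hook fires when the container starts; for the prestop variant it fires during pod deletion, which is the phase the repeated "still exists" polling lines above are waiting out before the handler is checked.
------------------------------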
• [SLOW TEST:34.219 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":757,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:29:02.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 28 21:29:03.838: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 28 21:29:05.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:29:07.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:29:09.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:29:11.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715843743, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:29:14.953: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:29:14.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:29:16.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6344" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.794 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":50,"skipped":766,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:29:16.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:29:40.760: INFO: Container started at 2020-01-28 21:29:24 +0000 UTC, pod became ready at 2020-01-28 21:29:40 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:29:40.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1011" for this suite. 
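------------------------------
Illustrative sketch (not part of the captured output): the probing spec above verifies that a pod with a delayed readiness probe is not marked Ready before the initial delay elapses (here the container started at 21:29:24 and the pod became ready at 21:29:40) and never restarts. A pod of that shape could look as follows; the name, image, port, and all timing values are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example   # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    args: ["test-webserver"]
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15   # Ready must not be reported before this delay
      periodSeconds: 5
      failureThreshold: 3
------------------------------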
• [SLOW TEST:24.139 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":788,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:29:40.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-cdf034a5-91a8-4b12-8ef2-0cb4be63f712 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-cdf034a5-91a8-4b12-8ef2-0cb4be63f712 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:30:56.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1422" for this suite. 
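------------------------------
Illustrative sketch (not part of the captured output): the projected-configMap spec above mounts a configMap through a projected volume, updates the configMap, and waits for the new contents to appear in the mounted file. A pod of that shape could look as follows; the pod name, image, and command are assumptions, while the configMap name is the one shown in the log.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox   # assumed image
    # poll the projected file so the update becomes observable from inside the container
    command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-cdf034a5-91a8-4b12-8ef2-0cb4be63f712   # name from the log

Kubelet refreshes projected volumes on its sync period, so the updated contents typically show up within a minute or so of the configMap change, which is what the "waiting to observe update in volume" step polls for.
------------------------------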
• [SLOW TEST:75.442 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":804,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:30:56.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8773 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8773 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8773 Jan 28 21:30:56.834: INFO: Found 0 stateful pods, waiting for 1 Jan 28 21:31:06.840: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 28 21:31:06.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 28 21:31:07.324: INFO: stderr: "I0128 21:31:07.071321 815 log.go:172] (0xc000af6000) (0xc0009121e0) Create stream\nI0128 21:31:07.071571 815 log.go:172] (0xc000af6000) (0xc0009121e0) Stream added, broadcasting: 1\nI0128 21:31:07.073990 815 log.go:172] (0xc000af6000) Reply frame received for 1\nI0128 21:31:07.074024 815 log.go:172] (0xc000af6000) (0xc000912280) Create stream\nI0128 21:31:07.074030 815 log.go:172] (0xc000af6000) (0xc000912280) Stream added, broadcasting: 3\nI0128 21:31:07.074940 815 log.go:172] (0xc000af6000) Reply frame received for 3\nI0128 21:31:07.074966 815 log.go:172] (0xc000af6000) (0xc0007e20a0) Create stream\nI0128 21:31:07.074972 815 log.go:172] (0xc000af6000) (0xc0007e20a0) Stream added, broadcasting: 5\nI0128 21:31:07.076894 815 log.go:172] (0xc000af6000) Reply frame received for 5\nI0128 21:31:07.150415 815 log.go:172] (0xc000af6000) Data frame received for 5\nI0128 21:31:07.150510 815 log.go:172] (0xc0007e20a0) (5) Data frame handling\nI0128 21:31:07.150537 815 log.go:172] (0xc0007e20a0) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0128 21:31:07.189877 815 log.go:172] (0xc000af6000) Data frame received for 3\nI0128 21:31:07.190054 815 log.go:172] (0xc000912280) (3) Data frame handling\nI0128 21:31:07.190088 815 log.go:172] (0xc000912280) (3) Data frame sent\nI0128 21:31:07.310124 815 log.go:172] (0xc000af6000) Data frame received for 1\nI0128 21:31:07.310528 815 log.go:172] (0xc000af6000) (0xc000912280) Stream removed, broadcasting: 3\nI0128 21:31:07.310670 815 log.go:172] (0xc0009121e0) (1) Data frame handling\nI0128 21:31:07.310700 815 log.go:172] (0xc0009121e0) (1) Data frame sent\nI0128 21:31:07.310749 815 log.go:172] (0xc000af6000) (0xc0007e20a0) Stream removed, broadcasting: 5\nI0128 21:31:07.310772 815 log.go:172] (0xc000af6000) (0xc0009121e0) Stream removed, broadcasting: 1\nI0128 21:31:07.310788 815 log.go:172] (0xc000af6000) Go away received\nI0128 21:31:07.312527 815 log.go:172] (0xc000af6000) (0xc0009121e0) Stream removed, broadcasting: 1\nI0128 21:31:07.312555 815 log.go:172] (0xc000af6000) (0xc000912280) Stream removed, broadcasting: 3\nI0128 21:31:07.312563 815 log.go:172] (0xc000af6000) (0xc0007e20a0) Stream removed, broadcasting: 5\n" Jan 28 21:31:07.324: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 28 21:31:07.324: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 28 21:31:07.330: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 28 21:31:07.330: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 21:31:07.351: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996288s Jan 28 21:31:08.362: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990959897s Jan 28 21:31:09.372: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979242248s Jan 28 21:31:10.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970072415s Jan 28 21:31:11.821: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.539748547s Jan 28 21:31:12.831: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.520070445s Jan 28 21:31:13.841: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.510766361s Jan 28 21:31:14.850: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.500600953s Jan 28 21:31:15.865: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.492147284s Jan 28 21:31:16.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 476.800715ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8773 Jan 28 21:31:17.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:31:18.318: INFO: stderr: "I0128 21:31:18.134014 835 log.go:172] (0xc000b56840) (0xc0008fe000) Create stream\nI0128 21:31:18.134294 835 log.go:172] (0xc000b56840) (0xc0008fe000) Stream added, broadcasting: 1\nI0128 21:31:18.137896 835 log.go:172] (0xc000b56840) Reply frame received for 1\nI0128 21:31:18.137975 835 log.go:172] (0xc000b56840) (0xc000970000) Create stream\nI0128 21:31:18.137992 835 log.go:172] (0xc000b56840) (0xc000970000) Stream added, broadcasting: 3\nI0128 21:31:18.140307 835 log.go:172] (0xc000b56840) Reply frame received for 3\nI0128 21:31:18.140340 835 log.go:172] 
(0xc000b56840) (0xc00091c000) Create stream\nI0128 21:31:18.140349 835 log.go:172] (0xc000b56840) (0xc00091c000) Stream added, broadcasting: 5\nI0128 21:31:18.141895 835 log.go:172] (0xc000b56840) Reply frame received for 5\nI0128 21:31:18.205019 835 log.go:172] (0xc000b56840) Data frame received for 5\nI0128 21:31:18.205244 835 log.go:172] (0xc00091c000) (5) Data frame handling\nI0128 21:31:18.205325 835 log.go:172] (0xc00091c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 21:31:18.206826 835 log.go:172] (0xc000b56840) Data frame received for 3\nI0128 21:31:18.206855 835 log.go:172] (0xc000970000) (3) Data frame handling\nI0128 21:31:18.206891 835 log.go:172] (0xc000970000) (3) Data frame sent\nI0128 21:31:18.302505 835 log.go:172] (0xc000b56840) (0xc000970000) Stream removed, broadcasting: 3\nI0128 21:31:18.302685 835 log.go:172] (0xc000b56840) Data frame received for 1\nI0128 21:31:18.302705 835 log.go:172] (0xc000b56840) (0xc00091c000) Stream removed, broadcasting: 5\nI0128 21:31:18.302740 835 log.go:172] (0xc0008fe000) (1) Data frame handling\nI0128 21:31:18.302763 835 log.go:172] (0xc0008fe000) (1) Data frame sent\nI0128 21:31:18.302774 835 log.go:172] (0xc000b56840) (0xc0008fe000) Stream removed, broadcasting: 1\nI0128 21:31:18.302785 835 log.go:172] (0xc000b56840) Go away received\nI0128 21:31:18.303435 835 log.go:172] (0xc000b56840) (0xc0008fe000) Stream removed, broadcasting: 1\nI0128 21:31:18.303446 835 log.go:172] (0xc000b56840) (0xc000970000) Stream removed, broadcasting: 3\nI0128 21:31:18.303452 835 log.go:172] (0xc000b56840) (0xc00091c000) Stream removed, broadcasting: 5\n" Jan 28 21:31:18.319: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 28 21:31:18.319: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 28 21:31:18.388: INFO: Found 2 stateful pods, waiting for 3 Jan 28 21:31:28.396: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:31:28.396: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:31:28.396: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 28 21:31:38.421: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:31:38.421: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:31:38.421: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 28 21:31:38.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 28 21:31:38.866: INFO: stderr: "I0128 21:31:38.689174 858 log.go:172] (0xc000bab760) (0xc000bde960) Create stream\nI0128 21:31:38.689480 858 log.go:172] (0xc000bab760) (0xc000bde960) Stream added, broadcasting: 1\nI0128 21:31:38.693669 858 log.go:172] (0xc000bab760) Reply frame received for 1\nI0128 21:31:38.693782 858 log.go:172] (0xc000bab760) (0xc000b3c5a0) Create stream\nI0128 21:31:38.693799 858 log.go:172] (0xc000bab760) (0xc000b3c5a0) Stream added, broadcasting: 3\nI0128 21:31:38.695804 858 log.go:172] (0xc000bab760) Reply frame received for 3\nI0128 21:31:38.695830 858 
log.go:172] (0xc000bab760) (0xc000bdea00) Create stream\nI0128 21:31:38.695842 858 log.go:172] (0xc000bab760) (0xc000bdea00) Stream added, broadcasting: 5\nI0128 21:31:38.697085 858 log.go:172] (0xc000bab760) Reply frame received for 5\nI0128 21:31:38.756909 858 log.go:172] (0xc000bab760) Data frame received for 5\nI0128 21:31:38.756942 858 log.go:172] (0xc000bdea00) (5) Data frame handling\nI0128 21:31:38.756970 858 log.go:172] (0xc000bdea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 21:31:38.758326 858 log.go:172] (0xc000bab760) Data frame received for 3\nI0128 21:31:38.758347 858 log.go:172] (0xc000b3c5a0) (3) Data frame handling\nI0128 21:31:38.758374 858 log.go:172] (0xc000b3c5a0) (3) Data frame sent\nI0128 21:31:38.838977 858 log.go:172] (0xc000bab760) (0xc000b3c5a0) Stream removed, broadcasting: 3\nI0128 21:31:38.839161 858 log.go:172] (0xc000bab760) Data frame received for 1\nI0128 21:31:38.839380 858 log.go:172] (0xc000bde960) (1) Data frame handling\nI0128 21:31:38.839439 858 log.go:172] (0xc000bde960) (1) Data frame sent\nI0128 21:31:38.839914 858 log.go:172] (0xc000bab760) (0xc000bde960) Stream removed, broadcasting: 1\nI0128 21:31:38.840248 858 log.go:172] (0xc000bab760) (0xc000bdea00) Stream removed, broadcasting: 5\nI0128 21:31:38.840533 858 log.go:172] (0xc000bab760) Go away received\nI0128 21:31:38.841588 858 log.go:172] (0xc000bab760) (0xc000bde960) Stream removed, broadcasting: 1\nI0128 21:31:38.841616 858 log.go:172] (0xc000bab760) (0xc000b3c5a0) Stream removed, broadcasting: 3\nI0128 21:31:38.841622 858 log.go:172] (0xc000bab760) (0xc000bdea00) Stream removed, broadcasting: 5\n" Jan 28 21:31:38.867: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 28 21:31:38.867: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 28 21:31:38.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 28 21:31:39.260: INFO: stderr: "I0128 21:31:39.012755 878 log.go:172] (0xc000104370) (0xc0004db5e0) Create stream\nI0128 21:31:39.012958 878 log.go:172] (0xc000104370) (0xc0004db5e0) Stream added, broadcasting: 1\nI0128 21:31:39.035837 878 log.go:172] (0xc000104370) Reply frame received for 1\nI0128 21:31:39.035887 878 log.go:172] (0xc000104370) (0xc0006bfb80) Create stream\nI0128 21:31:39.035894 878 log.go:172] (0xc000104370) (0xc0006bfb80) Stream added, broadcasting: 3\nI0128 21:31:39.036977 878 log.go:172] (0xc000104370) Reply frame received for 3\nI0128 21:31:39.036992 878 log.go:172] (0xc000104370) (0xc0009dc000) Create stream\nI0128 21:31:39.037004 878 log.go:172] (0xc000104370) (0xc0009dc000) Stream added, broadcasting: 5\nI0128 21:31:39.038044 878 log.go:172] (0xc000104370) Reply frame received for 5\nI0128 21:31:39.132576 878 log.go:172] (0xc000104370) Data frame received for 5\nI0128 21:31:39.132688 878 log.go:172] (0xc0009dc000) (5) Data frame handling\nI0128 21:31:39.132752 878 log.go:172] (0xc0009dc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 21:31:39.164761 878 log.go:172] (0xc000104370) Data frame received for 3\nI0128 21:31:39.164782 878 log.go:172] (0xc0006bfb80) (3) Data frame handling\nI0128 21:31:39.164793 878 log.go:172] (0xc0006bfb80) (3) Data frame sent\nI0128 21:31:39.251328 878 log.go:172] (0xc000104370) Data frame received for 
1\nI0128 21:31:39.251449 878 log.go:172] (0xc000104370) (0xc0009dc000) Stream removed, broadcasting: 5\nI0128 21:31:39.251513 878 log.go:172] (0xc0004db5e0) (1) Data frame handling\nI0128 21:31:39.251525 878 log.go:172] (0xc0004db5e0) (1) Data frame sent\nI0128 21:31:39.251552 878 log.go:172] (0xc000104370) (0xc0006bfb80) Stream removed, broadcasting: 3\nI0128 21:31:39.251567 878 log.go:172] (0xc000104370) (0xc0004db5e0) Stream removed, broadcasting: 1\nI0128 21:31:39.251581 878 log.go:172] (0xc000104370) Go away received\nI0128 21:31:39.252707 878 log.go:172] (0xc000104370) (0xc0004db5e0) Stream removed, broadcasting: 1\nI0128 21:31:39.252726 878 log.go:172] (0xc000104370) (0xc0006bfb80) Stream removed, broadcasting: 3\nI0128 21:31:39.252736 878 log.go:172] (0xc000104370) (0xc0009dc000) Stream removed, broadcasting: 5\n" Jan 28 21:31:39.260: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 28 21:31:39.260: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 28 21:31:39.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 28 21:31:39.646: INFO: stderr: "I0128 21:31:39.431678 899 log.go:172] (0xc000b18000) (0xc0006ae780) Create stream\nI0128 21:31:39.431935 899 log.go:172] (0xc000b18000) (0xc0006ae780) Stream added, broadcasting: 1\nI0128 21:31:39.436126 899 log.go:172] (0xc000b18000) Reply frame received for 1\nI0128 21:31:39.436221 899 log.go:172] (0xc000b18000) (0xc000519540) Create stream\nI0128 21:31:39.436237 899 log.go:172] (0xc000b18000) (0xc000519540) Stream added, broadcasting: 3\nI0128 21:31:39.437262 899 log.go:172] (0xc000b18000) Reply frame received for 3\nI0128 21:31:39.437304 899 log.go:172] (0xc000b18000) (0xc000a82000) Create stream\nI0128 21:31:39.437325 899 log.go:172] (0xc000b18000) (0xc000a82000) Stream added, broadcasting: 5\nI0128 21:31:39.438292 899 log.go:172] (0xc000b18000) Reply frame received for 5\nI0128 21:31:39.496274 899 log.go:172] (0xc000b18000) Data frame received for 5\nI0128 21:31:39.496333 899 log.go:172] (0xc000a82000) (5) Data frame handling\nI0128 21:31:39.496356 899 log.go:172] (0xc000a82000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 21:31:39.555313 899 log.go:172] (0xc000b18000) Data frame received for 3\nI0128 21:31:39.555371 899 log.go:172] (0xc000519540) (3) Data frame handling\nI0128 21:31:39.555395 899 log.go:172] (0xc000519540) (3) Data frame sent\nI0128 21:31:39.628222 899 log.go:172] (0xc000b18000) Data frame received for 1\nI0128 21:31:39.628425 899 log.go:172] (0xc000b18000) (0xc000a82000) Stream removed, broadcasting: 5\nI0128 21:31:39.628482 899 log.go:172] (0xc0006ae780) (1) Data frame handling\nI0128 21:31:39.628510 899 log.go:172] (0xc0006ae780) (1) Data frame sent\nI0128 21:31:39.628562 899 log.go:172] (0xc000b18000) (0xc000519540) Stream removed, broadcasting: 3\nI0128 21:31:39.628636 899 log.go:172] (0xc000b18000) (0xc0006ae780) Stream removed, broadcasting: 1\nI0128 21:31:39.628663 899 log.go:172] (0xc000b18000) Go away received\nI0128 21:31:39.630582 899 log.go:172] (0xc000b18000) (0xc0006ae780) Stream removed, broadcasting: 1\nI0128 21:31:39.630603 899 log.go:172] (0xc000b18000) (0xc000519540) Stream removed, broadcasting: 3\nI0128 21:31:39.630614 899 log.go:172] (0xc000b18000) (0xc000a82000) Stream removed, broadcasting: 5\n" 
Jan 28 21:31:39.646: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 28 21:31:39.646: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 28 21:31:39.646: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 21:31:39.653: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 28 21:31:49.674: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 28 21:31:49.674: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 28 21:31:49.674: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 28 21:31:49.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999422s Jan 28 21:31:50.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99272538s Jan 28 21:31:51.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987406877s Jan 28 21:31:52.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.910032745s Jan 28 21:31:53.851: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.887165837s Jan 28 21:31:55.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.863535996s Jan 28 21:31:56.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.033298807s Jan 28 21:31:57.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.019015612s Jan 28 21:31:58.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 972.394079ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8773 Jan 28 21:31:59.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:32:00.190: INFO: stderr: "I0128 21:31:59.992907 919 log.go:172] (0xc00090c630) (0xc00070dea0) Create stream\nI0128 21:31:59.993149 919 log.go:172] (0xc00090c630) (0xc00070dea0) Stream added, broadcasting: 1\nI0128 21:31:59.997412 919 log.go:172] (0xc00090c630) Reply frame received for 1\nI0128 21:31:59.997477 919 log.go:172] (0xc00090c630) (0xc000694780) Create stream\nI0128 21:31:59.997489 919 log.go:172] (0xc00090c630) (0xc000694780) Stream added, broadcasting: 3\nI0128 21:31:59.998695 919 log.go:172] (0xc00090c630) Reply frame received for 3\nI0128 21:31:59.998723 919 log.go:172] (0xc00090c630) (0xc000483540) Create stream\nI0128 21:31:59.998736 919 log.go:172] (0xc00090c630) (0xc000483540) Stream added, broadcasting: 5\nI0128 21:32:00.000012 919 log.go:172] (0xc00090c630) Reply frame received for 5\nI0128 21:32:00.078543 919 log.go:172] (0xc00090c630) Data frame received for 5\nI0128 21:32:00.078739 919 log.go:172] (0xc000483540) (5) Data frame handling\nI0128 21:32:00.078768 919 log.go:172] (0xc000483540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 21:32:00.078796 919 log.go:172] (0xc00090c630) Data frame received for 3\nI0128 21:32:00.078806 919 log.go:172] (0xc000694780) (3) Data frame handling\nI0128 21:32:00.078825 919 log.go:172] (0xc000694780) (3) Data frame sent\nI0128 21:32:00.175694 919 log.go:172] (0xc00090c630) (0xc000483540) Stream removed, broadcasting: 5\nI0128 21:32:00.175810 919 log.go:172] (0xc00090c630) Data frame received for 1\nI0128 21:32:00.175849 919 log.go:172] (0xc00090c630)
(0xc000694780) Stream removed, broadcasting: 3\nI0128 21:32:00.175928 919 log.go:172] (0xc00070dea0) (1) Data frame handling\nI0128 21:32:00.175949 919 log.go:172] (0xc00070dea0) (1) Data frame sent\nI0128 21:32:00.175957 919 log.go:172] (0xc00090c630) (0xc00070dea0) Stream removed, broadcasting: 1\nI0128 21:32:00.175969 919 log.go:172] (0xc00090c630) Go away received\nI0128 21:32:00.177258 919 log.go:172] (0xc00090c630) (0xc00070dea0) Stream removed, broadcasting: 1\nI0128 21:32:00.177269 919 log.go:172] (0xc00090c630) (0xc000694780) Stream removed, broadcasting: 3\nI0128 21:32:00.177273 919 log.go:172] (0xc00090c630) (0xc000483540) Stream removed, broadcasting: 5\n" Jan 28 21:32:00.191: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 28 21:32:00.191: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 28 21:32:00.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:32:00.595: INFO: stderr: "I0128 21:32:00.398371 943 log.go:172] (0xc0003dc210) (0xc00065bd60) Create stream\nI0128 21:32:00.399792 943 log.go:172] (0xc0003dc210) (0xc00065bd60) Stream added, broadcasting: 1\nI0128 21:32:00.409933 943 log.go:172] (0xc0003dc210) Reply frame received for 1\nI0128 21:32:00.410026 943 log.go:172] (0xc0003dc210) (0xc000590780) Create stream\nI0128 21:32:00.410047 943 log.go:172] (0xc0003dc210) (0xc000590780) Stream added, broadcasting: 3\nI0128 21:32:00.411247 943 log.go:172] (0xc0003dc210) Reply frame received for 3\nI0128 21:32:00.411318 943 log.go:172] (0xc0003dc210) (0xc0007c4b40) Create stream\nI0128 21:32:00.411330 943 log.go:172] (0xc0003dc210) (0xc0007c4b40) Stream added, broadcasting: 5\nI0128 21:32:00.412638 943 log.go:172] (0xc0003dc210) Reply frame received for 5\nI0128 21:32:00.473670 943 log.go:172] (0xc0003dc210) Data frame received for 5\nI0128 21:32:00.473735 943 log.go:172] (0xc0007c4b40) (5) Data frame handling\nI0128 21:32:00.473786 943 log.go:172] (0xc0007c4b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 21:32:00.475006 943 log.go:172] (0xc0003dc210) Data frame received for 3\nI0128 21:32:00.475050 943 log.go:172] (0xc000590780) (3) Data frame handling\nI0128 21:32:00.475097 943 log.go:172] (0xc000590780) (3) Data frame sent\nI0128 21:32:00.569698 943 log.go:172] (0xc0003dc210) Data frame received for 1\nI0128 21:32:00.570281 943 log.go:172] (0xc0003dc210) (0xc0007c4b40) Stream removed, broadcasting: 5\nI0128 21:32:00.570524 943 log.go:172] (0xc00065bd60) (1) Data frame handling\nI0128 21:32:00.570653 943 log.go:172] (0xc00065bd60) (1) Data frame sent\nI0128 21:32:00.571147 943 log.go:172] (0xc0003dc210) (0xc000590780) Stream removed, broadcasting: 3\nI0128 21:32:00.571541 943 log.go:172] (0xc0003dc210) (0xc00065bd60) Stream removed, broadcasting: 1\nI0128 21:32:00.571632 943 log.go:172] (0xc0003dc210) Go away received\nI0128 21:32:00.573843 943 log.go:172] (0xc0003dc210) (0xc00065bd60) Stream removed, broadcasting: 1\nI0128 21:32:00.573863 943 log.go:172] (0xc0003dc210) (0xc000590780) Stream removed, broadcasting: 3\nI0128 21:32:00.573877 943 log.go:172] (0xc0003dc210) (0xc0007c4b40) Stream removed, broadcasting: 5\n" Jan 28 21:32:00.595: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 28 21:32:00.595: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 28 21:32:00.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:32:00.902: INFO: rc: 126 Jan 28 21:32:00.903: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: cannot exec in a stopped state: unknown stderr: I0128 21:32:00.822008 965 log.go:172] (0xc0000f4580) (0xc000685ae0) Create stream I0128 21:32:00.822489 965 log.go:172] (0xc0000f4580) (0xc000685ae0) Stream added, broadcasting: 1 I0128 21:32:00.828202 965 log.go:172] (0xc0000f4580) Reply frame received for 1 I0128 21:32:00.828308 965 log.go:172] (0xc0000f4580) (0xc00092c000) Create stream I0128 21:32:00.828320 965 log.go:172] (0xc0000f4580) (0xc00092c000) Stream added, broadcasting: 3 I0128 21:32:00.830069 965 log.go:172] (0xc0000f4580) Reply frame received for 3 I0128 21:32:00.830113 965 log.go:172] (0xc0000f4580) (0xc00028c000) Create stream I0128 21:32:00.830119 965 log.go:172] (0xc0000f4580) (0xc00028c000) Stream added, broadcasting: 5 I0128 21:32:00.831885 965 log.go:172] (0xc0000f4580) Reply frame received for 5 I0128 21:32:00.872423 965 log.go:172] (0xc0000f4580) Data frame received for 3 I0128 21:32:00.872546 965 log.go:172] (0xc00092c000) (3) Data frame handling I0128 21:32:00.872601 965 log.go:172] (0xc00092c000) (3) Data frame sent I0128 21:32:00.889631 965 log.go:172] (0xc0000f4580) Data frame received for 1 I0128 21:32:00.889899 965 log.go:172] (0xc0000f4580) (0xc00028c000) Stream removed, broadcasting: 5 I0128 21:32:00.890068 965 log.go:172] (0xc000685ae0) (1) Data frame handling I0128 21:32:00.890111 965 log.go:172] (0xc000685ae0) (1) Data frame sent I0128 21:32:00.890209 965 log.go:172] (0xc0000f4580) (0xc00092c000) Stream removed, broadcasting: 3 I0128 21:32:00.890289 965 log.go:172] (0xc0000f4580) (0xc000685ae0) Stream removed, broadcasting: 1 I0128 21:32:00.890316 965 log.go:172] (0xc0000f4580) Go away received I0128 21:32:00.891743 965 log.go:172] (0xc0000f4580) (0xc000685ae0) Stream removed, broadcasting: 1 I0128 21:32:00.891872 965 log.go:172] (0xc0000f4580) (0xc00092c000) Stream removed, broadcasting: 3 I0128 21:32:00.891882 965 log.go:172] (0xc0000f4580) (0xc00028c000) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Jan 28 21:32:10.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:32:11.213: INFO: rc: 1 Jan 28 21:32:11.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 28 21:32:21.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:32:23.275: INFO: rc: 1 Jan 28 21:32:23.275: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 28 21:32:33.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:32:33.519: INFO: rc: 1 Jan 28 21:32:33.519: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1
(identical retry attempts, each ending in rc: 1 with the same NotFound error for pod "ss-2", repeated every 10 seconds from 21:32:43 through 21:36:58 and elided here)
Jan 28 21:37:08.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8773 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 28 21:37:08.719: INFO: rc: 1 Jan 28 21:37:08.720: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Jan 28 21:37:08.720: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 28 21:37:08.736: INFO: Deleting all statefulset in ns statefulset-8773 Jan 28 21:37:08.739: INFO: Scaling statefulset ss to 0 Jan 28 21:37:08.748: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 21:37:08.751: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:37:08.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8773" for this suite. 
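For context on the run above: the test scales the StatefulSet to zero and then confirms pods are removed in reverse ordinal order (ss-2 before ss-1 before ss-0), which is why every exec against ss-2 eventually returns NotFound until the helper gives up. A minimal sketch of reproducing the same check by hand, reusing the namespace and StatefulSet name from the log (the watch flag is the only addition):

  # Scale the StatefulSet down and watch pods terminate highest-ordinal-first
  kubectl --kubeconfig=/root/.kube/config -n statefulset-8773 scale statefulset ss --replicas=0
  kubectl --kubeconfig=/root/.kube/config -n statefulset-8773 get pods -w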
• [SLOW TEST:372.566 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":53,"skipped":808,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:37:08.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:37:44.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1120" for this suite. STEP: Destroying namespace "nsdeletetest-8073" for this suite. Jan 28 21:37:44.170: INFO: Namespace nsdeletetest-8073 was already deleted STEP: Destroying namespace "nsdeletetest-6814" for this suite. 
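The namespace-deletion test above creates a pod inside a throwaway namespace, deletes the namespace, and verifies that recreating a namespace with the same name yields no pods. A rough equivalent with plain kubectl; the namespace and pod names here are illustrative, not the generated nsdeletetest-* names from the run:

  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo run demo-pod --image=k8s.gcr.io/pause:3.1 --restart=Never
  kubectl delete namespace nsdelete-demo --wait=true   # blocks until the namespace and its pods are gone
  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo get pods                    # expect: No resources found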
• [SLOW TEST:35.388 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":54,"skipped":811,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:37:44.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Jan 28 21:37:44.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5680' Jan 28 21:37:44.656: INFO: stderr: "" Jan 28 21:37:44.656: INFO: stdout: "pod/pause created\n" Jan 28 21:37:44.656: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 28 21:37:44.656: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5680" to be "running and ready" Jan 28 21:37:44.662: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.669997ms Jan 28 21:37:46.668: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011220561s Jan 28 21:37:48.674: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01718825s Jan 28 21:37:50.749: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093001215s Jan 28 21:37:52.756: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.099623364s Jan 28 21:37:52.756: INFO: Pod "pause" satisfied condition "running and ready" Jan 28 21:37:52.756: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Jan 28 21:37:52.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5680' Jan 28 21:37:52.917: INFO: stderr: "" Jan 28 21:37:52.917: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 28 21:37:52.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5680' Jan 28 21:37:53.079: INFO: stderr: "" Jan 28 21:37:53.079: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 28 21:37:53.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5680' Jan 28 21:37:53.217: INFO: stderr: "" Jan 28 21:37:53.218: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 28 21:37:53.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5680' Jan 28 21:37:53.360: INFO: stderr: "" Jan 28 21:37:53.360: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Jan 28 21:37:53.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5680' Jan 28 21:37:53.538: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:37:53.539: INFO: stdout: "pod \"pause\" force deleted\n" Jan 28 21:37:53.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5680' Jan 28 21:37:53.711: INFO: stderr: "No resources found in kubectl-5680 namespace.\n" Jan 28 21:37:53.711: INFO: stdout: "" Jan 28 21:37:53.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5680 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 21:37:53.850: INFO: stderr: "" Jan 28 21:37:53.850: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:37:53.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5680" for this suite. 
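The label test exercises three kubectl label forms, all visible verbatim in the log: set a label, display it as a column, and remove it with the trailing-dash syntax. Condensed, against the same pod and namespace:

  kubectl -n kubectl-5680 label pods pause testing-label=testing-label-value
  kubectl -n kubectl-5680 get pod pause -L testing-label   # TESTING-LABEL column shows the value
  kubectl -n kubectl-5680 label pods pause testing-label-  # trailing '-' deletes the label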
• [SLOW TEST:9.691 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":55,"skipped":823,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:37:53.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:38:05.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1661" for this suite. • [SLOW TEST:11.288 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":56,"skipped":826,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:38:05.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:38:13.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-57" for this suite. • [SLOW TEST:8.355 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":57,"skipped":833,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:38:13.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 28 21:38:13.704: INFO: Waiting up to 5m0s for pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3" in namespace "emptydir-7940" to be "success or failure" Jan 28 21:38:13.725: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.730012ms Jan 28 21:38:15.731: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026828498s Jan 28 21:38:17.737: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033063465s Jan 28 21:38:19.746: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042152786s Jan 28 21:38:21.754: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050423252s Jan 28 21:38:23.763: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.059409058s Jan 28 21:38:25.775: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.071205681s STEP: Saw pod success Jan 28 21:38:25.776: INFO: Pod "pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3" satisfied condition "success or failure" Jan 28 21:38:25.783: INFO: Trying to get logs from node jerma-node pod pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3 container test-container: STEP: delete the pod Jan 28 21:38:25.943: INFO: Waiting for pod pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3 to disappear Jan 28 21:38:25.950: INFO: Pod pod-ecf84ce8-39bd-4dbc-8473-d799d82153a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:38:25.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7940" for this suite. • [SLOW TEST:12.450 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:38:25.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 28 21:38:34.103: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 28 21:38:44.243: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:38:44.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2802" for this suite. 
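The Delete Grace Period test submits a pod, deletes it gracefully, and checks that the kubelet observed the termination notice before the API object vanished. The two deletion modes it distinguishes look like this from the CLI; the pod name was not printed in the log, so a placeholder is used:

  kubectl -n pods-2802 delete pod <pod-name> --grace-period=30          # SIGTERM, then SIGKILL after 30s
  kubectl -n pods-2802 delete pod <pod-name> --grace-period=0 --force   # remove from the API immediately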
• [SLOW TEST:18.310 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":59,"skipped":896,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:38:44.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:38:44.416: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 28 21:38:48.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 create -f -' Jan 28 21:38:50.730: INFO: stderr: "" Jan 28 21:38:50.730: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 28 21:38:50.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 delete e2e-test-crd-publish-openapi-9839-crds test-foo' Jan 28 21:38:50.904: INFO: stderr: "" Jan 28 21:38:50.905: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 28 21:38:50.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 apply -f -' Jan 28 21:38:51.336: INFO: stderr: "" Jan 28 21:38:51.337: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 28 21:38:51.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 delete e2e-test-crd-publish-openapi-9839-crds test-foo' Jan 28 21:38:51.460: INFO: stderr: "" Jan 28 21:38:51.460: INFO: stdout: "e2e-test-crd-publish-openapi-9839-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 28 21:38:51.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 create -f -' Jan 28 21:38:51.925: INFO: rc: 1 Jan 28 21:38:51.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 apply -f -' Jan 28 21:38:52.342: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request 
without required properties Jan 28 21:38:52.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 create -f -' Jan 28 21:38:52.635: INFO: rc: 1 Jan 28 21:38:52.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8467 apply -f -' Jan 28 21:38:52.969: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 28 21:38:52.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9839-crds' Jan 28 21:38:53.453: INFO: stderr: "" Jan 28 21:38:53.454: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 28 21:38:53.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9839-crds.metadata' Jan 28 21:38:53.942: INFO: stderr: "" Jan 28 21:38:53.942: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. 
May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 28 21:38:53.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9839-crds.spec' Jan 28 21:38:54.414: INFO: stderr: "" Jan 28 21:38:54.414: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 28 21:38:54.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9839-crds.spec.bars' Jan 28 21:38:54.749: INFO: stderr: "" Jan 28 21:38:54.749: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9839-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 28 21:38:54.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9839-crds.spec.bars2' Jan 28 21:38:55.102: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:38:58.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8467" for this suite. • [SLOW TEST:14.402 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":60,"skipped":908,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:38:58.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-b1133fa7-d495-4b89-af9f-c83f64b5d05e in namespace container-probe-9066 Jan 28 21:39:06.778: INFO: Started pod test-webserver-b1133fa7-d495-4b89-af9f-c83f64b5d05e in namespace container-probe-9066 STEP: checking the pod's current state and verifying that restartCount is present Jan 28 21:39:06.783: INFO: Initial restart 
count of pod test-webserver-b1133fa7-d495-4b89-af9f-c83f64b5d05e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:43:08.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9066" for this suite. • [SLOW TEST:249.348 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:43:08.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 28 21:43:24.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 28 21:43:24.255: INFO: Pod pod-with-prestop-http-hook still exists Jan 28 21:43:26.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 28 21:43:26.262: INFO: Pod pod-with-prestop-http-hook still exists Jan 28 21:43:28.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 28 21:43:28.263: INFO: Pod pod-with-prestop-http-hook still exists Jan 28 21:43:30.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 28 21:43:30.269: INFO: Pod pod-with-prestop-http-hook still exists Jan 28 21:43:32.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 28 21:43:32.262: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:43:32.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7254" for this suite. 
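The prestop test deletes a pod carrying a lifecycle preStop HTTP hook and then confirms, via the handler pod created in BeforeEach, that the hook's HTTP request arrived before termination. A minimal sketch of such a pod; the name, image, host, path, and port below are illustrative assumptions, and only the lifecycle.preStop.httpGet shape is the point:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo
  spec:
    containers:
    - name: web
      image: k8s.gcr.io/pause:3.1
      lifecycle:
        preStop:
          httpGet:
            # the e2e test points this at a separate handler pod's IP so it
            # can observe the request; values here are placeholders
            host: 10.32.0.5
            path: /prestop
            port: 8080
  EOF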
• [SLOW TEST:24.273 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":935,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:43:32.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jan 28 21:43:32.359: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:43:32.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1644" for this suite. 
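The proxy spec that just ran leans on kubectl choosing a random port when -p 0 is passed and announcing it on stdout. A hedged Go sketch of the same round trip, assuming only that kubectl is on PATH and prints its usual "Starting to serve on <addr>" banner:

package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask for port 0 so a free port is assigned.
	cmd := exec.Command("kubectl", "proxy", "-p", "0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// kubectl prints e.g. "Starting to serve on 127.0.0.1:37041".
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	addr := strings.TrimSpace(strings.TrimPrefix(line, "Starting to serve on "))

	// The equivalent of the test's "curling proxy /api/ output" step.
	resp, err := http.Get("http://" + addr + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expect an APIVersions JSON document
}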
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":63,"skipped":937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:43:32.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 28 21:43:43.010: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:43:43.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7468" for this suite. • [SLOW TEST:10.651 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:43:43.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 28 21:44:03.430: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:03.430: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:03.490459 8 log.go:172] (0xc002651b80) (0xc0029d32c0) Create stream I0128 21:44:03.490679 8 log.go:172] (0xc002651b80) (0xc0029d32c0) Stream added, broadcasting: 1 I0128 21:44:03.495679 8 log.go:172] (0xc002651b80) Reply frame received for 1 I0128 21:44:03.495729 8 log.go:172] (0xc002651b80) (0xc001bd8500) Create stream I0128 21:44:03.495740 8 log.go:172] (0xc002651b80) (0xc001bd8500) Stream added, broadcasting: 3 I0128 21:44:03.497347 8 log.go:172] (0xc002651b80) Reply frame received for 3 I0128 21:44:03.497376 8 log.go:172] (0xc002651b80) (0xc001bd85a0) Create stream I0128 21:44:03.497386 8 log.go:172] (0xc002651b80) (0xc001bd85a0) Stream added, broadcasting: 5 I0128 21:44:03.500035 8 log.go:172] (0xc002651b80) Reply frame received for 5 I0128 21:44:03.576069 8 log.go:172] (0xc002651b80) Data frame received for 3 I0128 21:44:03.576156 8 log.go:172] (0xc001bd8500) (3) Data frame handling I0128 21:44:03.576190 8 log.go:172] (0xc001bd8500) (3) Data frame sent I0128 21:44:03.673628 8 log.go:172] (0xc002651b80) Data frame received for 1 I0128 21:44:03.673852 8 log.go:172] (0xc002651b80) (0xc001bd85a0) Stream removed, broadcasting: 5 I0128 21:44:03.674065 8 log.go:172] (0xc0029d32c0) (1) Data frame handling I0128 21:44:03.674092 8 log.go:172] (0xc0029d32c0) (1) Data frame sent I0128 21:44:03.674353 8 log.go:172] (0xc002651b80) (0xc001bd8500) Stream removed, broadcasting: 3 I0128 21:44:03.674388 8 log.go:172] (0xc002651b80) (0xc0029d32c0) Stream removed, broadcasting: 1 I0128 21:44:03.674406 8 log.go:172] (0xc002651b80) Go away received I0128 21:44:03.675104 8 log.go:172] (0xc002651b80) (0xc0029d32c0) Stream removed, broadcasting: 1 I0128 21:44:03.675138 8 log.go:172] (0xc002651b80) (0xc001bd8500) Stream removed, broadcasting: 3 I0128 21:44:03.675155 8 log.go:172] (0xc002651b80) (0xc001bd85a0) Stream removed, broadcasting: 5 Jan 28 21:44:03.675: INFO: Exec stderr: "" Jan 28 21:44:03.675: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:03.675: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:03.725525 8 log.go:172] (0xc000ffcbb0) (0xc0014220a0) Create stream I0128 21:44:03.725845 8 log.go:172] (0xc000ffcbb0) (0xc0014220a0) Stream added, broadcasting: 1 I0128 21:44:03.730344 8 log.go:172] (0xc000ffcbb0) Reply frame received for 1 I0128 21:44:03.730419 8 log.go:172] (0xc000ffcbb0) (0xc0029d3360) Create stream I0128 21:44:03.730431 8 log.go:172] (0xc000ffcbb0) (0xc0029d3360) Stream added, broadcasting: 3 I0128 21:44:03.731567 8 log.go:172] (0xc000ffcbb0) Reply frame received for 3 I0128 21:44:03.731717 8 log.go:172] (0xc000ffcbb0) (0xc001ee0460) Create stream I0128 21:44:03.731746 8 log.go:172] (0xc000ffcbb0) (0xc001ee0460) Stream added, broadcasting: 5 I0128 21:44:03.735322 8 log.go:172] (0xc000ffcbb0) Reply frame received for 5 I0128 
21:44:03.805454 8 log.go:172] (0xc000ffcbb0) Data frame received for 3 I0128 21:44:03.805716 8 log.go:172] (0xc0029d3360) (3) Data frame handling I0128 21:44:03.805776 8 log.go:172] (0xc0029d3360) (3) Data frame sent I0128 21:44:03.918886 8 log.go:172] (0xc000ffcbb0) (0xc0029d3360) Stream removed, broadcasting: 3 I0128 21:44:03.919194 8 log.go:172] (0xc000ffcbb0) Data frame received for 1 I0128 21:44:03.919268 8 log.go:172] (0xc0014220a0) (1) Data frame handling I0128 21:44:03.919305 8 log.go:172] (0xc0014220a0) (1) Data frame sent I0128 21:44:03.919343 8 log.go:172] (0xc000ffcbb0) (0xc001ee0460) Stream removed, broadcasting: 5 I0128 21:44:03.919403 8 log.go:172] (0xc000ffcbb0) (0xc0014220a0) Stream removed, broadcasting: 1 I0128 21:44:03.919498 8 log.go:172] (0xc000ffcbb0) Go away received I0128 21:44:03.919908 8 log.go:172] (0xc000ffcbb0) (0xc0014220a0) Stream removed, broadcasting: 1 I0128 21:44:03.919970 8 log.go:172] (0xc000ffcbb0) (0xc0029d3360) Stream removed, broadcasting: 3 I0128 21:44:03.919983 8 log.go:172] (0xc000ffcbb0) (0xc001ee0460) Stream removed, broadcasting: 5 Jan 28 21:44:03.920: INFO: Exec stderr: "" Jan 28 21:44:03.920: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:03.920: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:03.982066 8 log.go:172] (0xc0029dc2c0) (0xc001ee06e0) Create stream I0128 21:44:03.982476 8 log.go:172] (0xc0029dc2c0) (0xc001ee06e0) Stream added, broadcasting: 1 I0128 21:44:03.992915 8 log.go:172] (0xc0029dc2c0) Reply frame received for 1 I0128 21:44:03.993174 8 log.go:172] (0xc0029dc2c0) (0xc001ee0780) Create stream I0128 21:44:03.993206 8 log.go:172] (0xc0029dc2c0) (0xc001ee0780) Stream added, broadcasting: 3 I0128 21:44:03.999209 8 log.go:172] (0xc0029dc2c0) Reply frame received for 3 I0128 21:44:03.999767 8 log.go:172] (0xc0029dc2c0) (0xc001422140) Create stream I0128 21:44:03.999838 8 log.go:172] (0xc0029dc2c0) (0xc001422140) Stream added, broadcasting: 5 I0128 21:44:04.002196 8 log.go:172] (0xc0029dc2c0) Reply frame received for 5 I0128 21:44:04.080167 8 log.go:172] (0xc0029dc2c0) Data frame received for 3 I0128 21:44:04.080492 8 log.go:172] (0xc001ee0780) (3) Data frame handling I0128 21:44:04.080519 8 log.go:172] (0xc001ee0780) (3) Data frame sent I0128 21:44:04.178854 8 log.go:172] (0xc0029dc2c0) Data frame received for 1 I0128 21:44:04.179166 8 log.go:172] (0xc0029dc2c0) (0xc001ee0780) Stream removed, broadcasting: 3 I0128 21:44:04.179375 8 log.go:172] (0xc001ee06e0) (1) Data frame handling I0128 21:44:04.179437 8 log.go:172] (0xc001ee06e0) (1) Data frame sent I0128 21:44:04.179477 8 log.go:172] (0xc0029dc2c0) (0xc001422140) Stream removed, broadcasting: 5 I0128 21:44:04.179518 8 log.go:172] (0xc0029dc2c0) (0xc001ee06e0) Stream removed, broadcasting: 1 I0128 21:44:04.179620 8 log.go:172] (0xc0029dc2c0) Go away received I0128 21:44:04.180262 8 log.go:172] (0xc0029dc2c0) (0xc001ee06e0) Stream removed, broadcasting: 1 I0128 21:44:04.180306 8 log.go:172] (0xc0029dc2c0) (0xc001ee0780) Stream removed, broadcasting: 3 I0128 21:44:04.180322 8 log.go:172] (0xc0029dc2c0) (0xc001422140) Stream removed, broadcasting: 5 Jan 28 21:44:04.180: INFO: Exec stderr: "" Jan 28 21:44:04.180: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 
28 21:44:04.181: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:04.230848 8 log.go:172] (0xc001f32160) (0xc0017783c0) Create stream I0128 21:44:04.231235 8 log.go:172] (0xc001f32160) (0xc0017783c0) Stream added, broadcasting: 1 I0128 21:44:04.240412 8 log.go:172] (0xc001f32160) Reply frame received for 1 I0128 21:44:04.241134 8 log.go:172] (0xc001f32160) (0xc0029d3400) Create stream I0128 21:44:04.241177 8 log.go:172] (0xc001f32160) (0xc0029d3400) Stream added, broadcasting: 3 I0128 21:44:04.244002 8 log.go:172] (0xc001f32160) Reply frame received for 3 I0128 21:44:04.244113 8 log.go:172] (0xc001f32160) (0xc001ee0960) Create stream I0128 21:44:04.244138 8 log.go:172] (0xc001f32160) (0xc001ee0960) Stream added, broadcasting: 5 I0128 21:44:04.245320 8 log.go:172] (0xc001f32160) Reply frame received for 5 I0128 21:44:04.352449 8 log.go:172] (0xc001f32160) Data frame received for 3 I0128 21:44:04.352943 8 log.go:172] (0xc0029d3400) (3) Data frame handling I0128 21:44:04.352987 8 log.go:172] (0xc0029d3400) (3) Data frame sent I0128 21:44:04.490188 8 log.go:172] (0xc001f32160) (0xc001ee0960) Stream removed, broadcasting: 5 I0128 21:44:04.490643 8 log.go:172] (0xc001f32160) Data frame received for 1 I0128 21:44:04.490682 8 log.go:172] (0xc001f32160) (0xc0029d3400) Stream removed, broadcasting: 3 I0128 21:44:04.490727 8 log.go:172] (0xc0017783c0) (1) Data frame handling I0128 21:44:04.490783 8 log.go:172] (0xc0017783c0) (1) Data frame sent I0128 21:44:04.490800 8 log.go:172] (0xc001f32160) (0xc0017783c0) Stream removed, broadcasting: 1 I0128 21:44:04.490833 8 log.go:172] (0xc001f32160) Go away received I0128 21:44:04.491497 8 log.go:172] (0xc001f32160) (0xc0017783c0) Stream removed, broadcasting: 1 I0128 21:44:04.491509 8 log.go:172] (0xc001f32160) (0xc0029d3400) Stream removed, broadcasting: 3 I0128 21:44:04.491519 8 log.go:172] (0xc001f32160) (0xc001ee0960) Stream removed, broadcasting: 5 Jan 28 21:44:04.491: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 28 21:44:04.491: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:04.491: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:04.540383 8 log.go:172] (0xc001f32790) (0xc001778820) Create stream I0128 21:44:04.540674 8 log.go:172] (0xc001f32790) (0xc001778820) Stream added, broadcasting: 1 I0128 21:44:04.546725 8 log.go:172] (0xc001f32790) Reply frame received for 1 I0128 21:44:04.546893 8 log.go:172] (0xc001f32790) (0xc0014221e0) Create stream I0128 21:44:04.546917 8 log.go:172] (0xc001f32790) (0xc0014221e0) Stream added, broadcasting: 3 I0128 21:44:04.551396 8 log.go:172] (0xc001f32790) Reply frame received for 3 I0128 21:44:04.551447 8 log.go:172] (0xc001f32790) (0xc0017788c0) Create stream I0128 21:44:04.551466 8 log.go:172] (0xc001f32790) (0xc0017788c0) Stream added, broadcasting: 5 I0128 21:44:04.552810 8 log.go:172] (0xc001f32790) Reply frame received for 5 I0128 21:44:04.644122 8 log.go:172] (0xc001f32790) Data frame received for 3 I0128 21:44:04.644441 8 log.go:172] (0xc0014221e0) (3) Data frame handling I0128 21:44:04.644521 8 log.go:172] (0xc0014221e0) (3) Data frame sent I0128 21:44:04.731389 8 log.go:172] (0xc001f32790) Data frame received for 1 I0128 21:44:04.731513 8 log.go:172] (0xc001778820) (1) Data frame handling I0128 21:44:04.731584 8 log.go:172] (0xc001778820) (1) Data frame 
sent I0128 21:44:04.731984 8 log.go:172] (0xc001f32790) (0xc0014221e0) Stream removed, broadcasting: 3 I0128 21:44:04.732068 8 log.go:172] (0xc001f32790) (0xc001778820) Stream removed, broadcasting: 1 I0128 21:44:04.732438 8 log.go:172] (0xc001f32790) (0xc0017788c0) Stream removed, broadcasting: 5 I0128 21:44:04.732539 8 log.go:172] (0xc001f32790) (0xc001778820) Stream removed, broadcasting: 1 I0128 21:44:04.732550 8 log.go:172] (0xc001f32790) (0xc0014221e0) Stream removed, broadcasting: 3 I0128 21:44:04.732558 8 log.go:172] (0xc001f32790) (0xc0017788c0) Stream removed, broadcasting: 5 I0128 21:44:04.732622 8 log.go:172] (0xc001f32790) Go away received Jan 28 21:44:04.733: INFO: Exec stderr: "" Jan 28 21:44:04.733: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:04.733: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:04.769335 8 log.go:172] (0xc001f32d10) (0xc001778aa0) Create stream I0128 21:44:04.769571 8 log.go:172] (0xc001f32d10) (0xc001778aa0) Stream added, broadcasting: 1 I0128 21:44:04.773835 8 log.go:172] (0xc001f32d10) Reply frame received for 1 I0128 21:44:04.773863 8 log.go:172] (0xc001f32d10) (0xc001422640) Create stream I0128 21:44:04.773870 8 log.go:172] (0xc001f32d10) (0xc001422640) Stream added, broadcasting: 3 I0128 21:44:04.774837 8 log.go:172] (0xc001f32d10) Reply frame received for 3 I0128 21:44:04.774857 8 log.go:172] (0xc001f32d10) (0xc001ee0a00) Create stream I0128 21:44:04.774865 8 log.go:172] (0xc001f32d10) (0xc001ee0a00) Stream added, broadcasting: 5 I0128 21:44:04.775832 8 log.go:172] (0xc001f32d10) Reply frame received for 5 I0128 21:44:04.841112 8 log.go:172] (0xc001f32d10) Data frame received for 3 I0128 21:44:04.841177 8 log.go:172] (0xc001422640) (3) Data frame handling I0128 21:44:04.841195 8 log.go:172] (0xc001422640) (3) Data frame sent I0128 21:44:04.916952 8 log.go:172] (0xc001f32d10) Data frame received for 1 I0128 21:44:04.917047 8 log.go:172] (0xc001778aa0) (1) Data frame handling I0128 21:44:04.917066 8 log.go:172] (0xc001778aa0) (1) Data frame sent I0128 21:44:04.917087 8 log.go:172] (0xc001f32d10) (0xc001778aa0) Stream removed, broadcasting: 1 I0128 21:44:04.919788 8 log.go:172] (0xc001f32d10) (0xc001422640) Stream removed, broadcasting: 3 I0128 21:44:04.919861 8 log.go:172] (0xc001f32d10) (0xc001ee0a00) Stream removed, broadcasting: 5 I0128 21:44:04.919912 8 log.go:172] (0xc001f32d10) Go away received I0128 21:44:04.919947 8 log.go:172] (0xc001f32d10) (0xc001778aa0) Stream removed, broadcasting: 1 I0128 21:44:04.919978 8 log.go:172] (0xc001f32d10) (0xc001422640) Stream removed, broadcasting: 3 I0128 21:44:04.919991 8 log.go:172] (0xc001f32d10) (0xc001ee0a00) Stream removed, broadcasting: 5 Jan 28 21:44:04.920: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 28 21:44:04.920: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:04.920: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:04.966332 8 log.go:172] (0xc000ffd340) (0xc001422b40) Create stream I0128 21:44:04.966521 8 log.go:172] (0xc000ffd340) (0xc001422b40) Stream added, broadcasting: 1 I0128 21:44:04.971429 8 log.go:172] (0xc000ffd340) Reply frame received for 1 I0128 
21:44:04.971522 8 log.go:172] (0xc000ffd340) (0xc001ee0aa0) Create stream I0128 21:44:04.971538 8 log.go:172] (0xc000ffd340) (0xc001ee0aa0) Stream added, broadcasting: 3 I0128 21:44:04.972745 8 log.go:172] (0xc000ffd340) Reply frame received for 3 I0128 21:44:04.972782 8 log.go:172] (0xc000ffd340) (0xc001779040) Create stream I0128 21:44:04.972793 8 log.go:172] (0xc000ffd340) (0xc001779040) Stream added, broadcasting: 5 I0128 21:44:04.974212 8 log.go:172] (0xc000ffd340) Reply frame received for 5 I0128 21:44:05.059412 8 log.go:172] (0xc000ffd340) Data frame received for 3 I0128 21:44:05.059494 8 log.go:172] (0xc001ee0aa0) (3) Data frame handling I0128 21:44:05.059513 8 log.go:172] (0xc001ee0aa0) (3) Data frame sent I0128 21:44:05.166273 8 log.go:172] (0xc000ffd340) Data frame received for 1 I0128 21:44:05.166378 8 log.go:172] (0xc000ffd340) (0xc001779040) Stream removed, broadcasting: 5 I0128 21:44:05.166426 8 log.go:172] (0xc001422b40) (1) Data frame handling I0128 21:44:05.166455 8 log.go:172] (0xc001422b40) (1) Data frame sent I0128 21:44:05.166479 8 log.go:172] (0xc000ffd340) (0xc001ee0aa0) Stream removed, broadcasting: 3 I0128 21:44:05.166508 8 log.go:172] (0xc000ffd340) (0xc001422b40) Stream removed, broadcasting: 1 I0128 21:44:05.166539 8 log.go:172] (0xc000ffd340) Go away received I0128 21:44:05.167013 8 log.go:172] (0xc000ffd340) (0xc001422b40) Stream removed, broadcasting: 1 I0128 21:44:05.167044 8 log.go:172] (0xc000ffd340) (0xc001ee0aa0) Stream removed, broadcasting: 3 I0128 21:44:05.167063 8 log.go:172] (0xc000ffd340) (0xc001779040) Stream removed, broadcasting: 5 Jan 28 21:44:05.167: INFO: Exec stderr: "" Jan 28 21:44:05.167: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:05.167: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:05.205941 8 log.go:172] (0xc001f333f0) (0xc001779220) Create stream I0128 21:44:05.206092 8 log.go:172] (0xc001f333f0) (0xc001779220) Stream added, broadcasting: 1 I0128 21:44:05.211406 8 log.go:172] (0xc001f333f0) Reply frame received for 1 I0128 21:44:05.211477 8 log.go:172] (0xc001f333f0) (0xc0029d34a0) Create stream I0128 21:44:05.211493 8 log.go:172] (0xc001f333f0) (0xc0029d34a0) Stream added, broadcasting: 3 I0128 21:44:05.213325 8 log.go:172] (0xc001f333f0) Reply frame received for 3 I0128 21:44:05.213344 8 log.go:172] (0xc001f333f0) (0xc001422be0) Create stream I0128 21:44:05.213353 8 log.go:172] (0xc001f333f0) (0xc001422be0) Stream added, broadcasting: 5 I0128 21:44:05.215688 8 log.go:172] (0xc001f333f0) Reply frame received for 5 I0128 21:44:05.291603 8 log.go:172] (0xc001f333f0) Data frame received for 3 I0128 21:44:05.291709 8 log.go:172] (0xc0029d34a0) (3) Data frame handling I0128 21:44:05.291733 8 log.go:172] (0xc0029d34a0) (3) Data frame sent I0128 21:44:05.375697 8 log.go:172] (0xc001f333f0) (0xc0029d34a0) Stream removed, broadcasting: 3 I0128 21:44:05.375954 8 log.go:172] (0xc001f333f0) Data frame received for 1 I0128 21:44:05.375980 8 log.go:172] (0xc001779220) (1) Data frame handling I0128 21:44:05.376505 8 log.go:172] (0xc001779220) (1) Data frame sent I0128 21:44:05.376593 8 log.go:172] (0xc001f333f0) (0xc001422be0) Stream removed, broadcasting: 5 I0128 21:44:05.376888 8 log.go:172] (0xc001f333f0) (0xc001779220) Stream removed, broadcasting: 1 I0128 21:44:05.376971 8 log.go:172] (0xc001f333f0) Go away received I0128 21:44:05.378329 8 log.go:172] 
(0xc001f333f0) (0xc001779220) Stream removed, broadcasting: 1 I0128 21:44:05.378471 8 log.go:172] (0xc001f333f0) (0xc0029d34a0) Stream removed, broadcasting: 3 I0128 21:44:05.378519 8 log.go:172] (0xc001f333f0) (0xc001422be0) Stream removed, broadcasting: 5 Jan 28 21:44:05.378: INFO: Exec stderr: "" Jan 28 21:44:05.379: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:05.379: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:05.432740 8 log.go:172] (0xc0023ce370) (0xc0029781e0) Create stream I0128 21:44:05.432842 8 log.go:172] (0xc0023ce370) (0xc0029781e0) Stream added, broadcasting: 1 I0128 21:44:05.439638 8 log.go:172] (0xc0023ce370) Reply frame received for 1 I0128 21:44:05.439752 8 log.go:172] (0xc0023ce370) (0xc0017792c0) Create stream I0128 21:44:05.439767 8 log.go:172] (0xc0023ce370) (0xc0017792c0) Stream added, broadcasting: 3 I0128 21:44:05.441782 8 log.go:172] (0xc0023ce370) Reply frame received for 3 I0128 21:44:05.441821 8 log.go:172] (0xc0023ce370) (0xc001422c80) Create stream I0128 21:44:05.441829 8 log.go:172] (0xc0023ce370) (0xc001422c80) Stream added, broadcasting: 5 I0128 21:44:05.443806 8 log.go:172] (0xc0023ce370) Reply frame received for 5 I0128 21:44:05.534528 8 log.go:172] (0xc0023ce370) Data frame received for 3 I0128 21:44:05.534651 8 log.go:172] (0xc0017792c0) (3) Data frame handling I0128 21:44:05.534670 8 log.go:172] (0xc0017792c0) (3) Data frame sent I0128 21:44:05.601265 8 log.go:172] (0xc0023ce370) Data frame received for 1 I0128 21:44:05.601311 8 log.go:172] (0xc0029781e0) (1) Data frame handling I0128 21:44:05.601335 8 log.go:172] (0xc0029781e0) (1) Data frame sent I0128 21:44:05.601364 8 log.go:172] (0xc0023ce370) (0xc0029781e0) Stream removed, broadcasting: 1 I0128 21:44:05.601390 8 log.go:172] (0xc0023ce370) (0xc001422c80) Stream removed, broadcasting: 5 I0128 21:44:05.601447 8 log.go:172] (0xc0023ce370) (0xc0017792c0) Stream removed, broadcasting: 3 I0128 21:44:05.601465 8 log.go:172] (0xc0023ce370) Go away received I0128 21:44:05.601612 8 log.go:172] (0xc0023ce370) (0xc0029781e0) Stream removed, broadcasting: 1 I0128 21:44:05.601694 8 log.go:172] (0xc0023ce370) (0xc0017792c0) Stream removed, broadcasting: 3 I0128 21:44:05.601736 8 log.go:172] (0xc0023ce370) (0xc001422c80) Stream removed, broadcasting: 5 Jan 28 21:44:05.601: INFO: Exec stderr: "" Jan 28 21:44:05.602: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-275 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:44:05.602: INFO: >>> kubeConfig: /root/.kube/config I0128 21:44:05.664307 8 log.go:172] (0xc001f33a20) (0xc001779720) Create stream I0128 21:44:05.664396 8 log.go:172] (0xc001f33a20) (0xc001779720) Stream added, broadcasting: 1 I0128 21:44:05.671642 8 log.go:172] (0xc001f33a20) Reply frame received for 1 I0128 21:44:05.671729 8 log.go:172] (0xc001f33a20) (0xc001ee0be0) Create stream I0128 21:44:05.671744 8 log.go:172] (0xc001f33a20) (0xc001ee0be0) Stream added, broadcasting: 3 I0128 21:44:05.673129 8 log.go:172] (0xc001f33a20) Reply frame received for 3 I0128 21:44:05.673156 8 log.go:172] (0xc001f33a20) (0xc001779860) Create stream I0128 21:44:05.673162 8 log.go:172] (0xc001f33a20) (0xc001779860) Stream added, broadcasting: 5 I0128 21:44:05.674313 8 log.go:172] (0xc001f33a20) Reply frame 
received for 5 I0128 21:44:05.753961 8 log.go:172] (0xc001f33a20) Data frame received for 3 I0128 21:44:05.754138 8 log.go:172] (0xc001ee0be0) (3) Data frame handling I0128 21:44:05.754178 8 log.go:172] (0xc001ee0be0) (3) Data frame sent I0128 21:44:05.835124 8 log.go:172] (0xc001f33a20) Data frame received for 1 I0128 21:44:05.835272 8 log.go:172] (0xc001f33a20) (0xc001ee0be0) Stream removed, broadcasting: 3 I0128 21:44:05.835349 8 log.go:172] (0xc001779720) (1) Data frame handling I0128 21:44:05.835373 8 log.go:172] (0xc001779720) (1) Data frame sent I0128 21:44:05.835417 8 log.go:172] (0xc001f33a20) (0xc001779860) Stream removed, broadcasting: 5 I0128 21:44:05.835446 8 log.go:172] (0xc001f33a20) (0xc001779720) Stream removed, broadcasting: 1 I0128 21:44:05.835466 8 log.go:172] (0xc001f33a20) Go away received I0128 21:44:05.835723 8 log.go:172] (0xc001f33a20) (0xc001779720) Stream removed, broadcasting: 1 I0128 21:44:05.835742 8 log.go:172] (0xc001f33a20) (0xc001ee0be0) Stream removed, broadcasting: 3 I0128 21:44:05.835754 8 log.go:172] (0xc001f33a20) (0xc001779860) Stream removed, broadcasting: 5 Jan 28 21:44:05.835: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:44:05.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-275" for this suite. • [SLOW TEST:22.690 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1003,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:44:05.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0128 21:44:17.757579 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
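For the garbage-collector spec in progress here, the step "delete the rc" is a delete without orphaning, so the collector reaps the pods through their ownerReferences instead of leaving them behind. A minimal client-go sketch, using current client-go signatures (the 1.17-era calls took no context argument); the RC name is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background propagation: the RC goes away immediately and the GC
	// deletes the dependent pods afterwards. Foreground would also cascade;
	// Orphan is the one behavior this spec rules out.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-7863").Delete(
		context.TODO(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("rc deleted; waiting for pods to be garbage collected")
}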
Jan 28 21:44:17.757: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:44:17.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7863" for this suite. • [SLOW TEST:11.951 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":66,"skipped":1018,"failed":0} SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:44:17.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 28 21:44:17.885: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:44:43.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7402" for this suite. 
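"setting up watch" and the two "observed" verifications above reduce to a watch scoped to the pod's name; creation, the termination notice, and the final delete all arrive as events on one channel. A client-go sketch under the same signature caveat as before; the pod name is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch a single pod by name in the test namespace.
	w, err := client.CoreV1().Pods("pods-7402").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove", // hypothetical pod name
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// ADDED confirms creation was observed; DELETED confirms removal.
	for ev := range w.ResultChan() {
		fmt.Println("observed:", ev.Type)
		if ev.Type == watch.Deleted {
			return
		}
	}
}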
• [SLOW TEST:25.333 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1020,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:44:43.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-35 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 28 21:44:43.550: INFO: Found 0 stateful pods, waiting for 3 Jan 28 21:44:53.567: INFO: Found 2 stateful pods, waiting for 3 Jan 28 21:45:03.564: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:03.564: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:03.564: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 28 21:45:13.559: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:13.559: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:13.559: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 28 21:45:13.595: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 28 21:45:23.643: INFO: Updating stateful set ss2 Jan 28 21:45:23.654: INFO: Waiting for Pod statefulset-35/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 28 21:45:36.917: INFO: Found 2 stateful pods, waiting for 3 Jan 28 21:45:46.925: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:46.925: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:46.925: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 28 21:45:56.927: INFO: 
Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:56.928: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 28 21:45:56.928: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 28 21:45:57.062: INFO: Updating stateful set ss2 Jan 28 21:45:57.084: INFO: Waiting for Pod statefulset-35/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 28 21:46:07.102: INFO: Waiting for Pod statefulset-35/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 28 21:46:17.137: INFO: Updating stateful set ss2 Jan 28 21:46:17.219: INFO: Waiting for StatefulSet statefulset-35/ss2 to complete update Jan 28 21:46:17.219: INFO: Waiting for Pod statefulset-35/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 28 21:46:27.227: INFO: Waiting for StatefulSet statefulset-35/ss2 to complete update Jan 28 21:46:27.227: INFO: Waiting for Pod statefulset-35/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 28 21:46:38.193: INFO: Waiting for StatefulSet statefulset-35/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 28 21:46:47.273: INFO: Deleting all statefulset in ns statefulset-35 Jan 28 21:46:47.276: INFO: Scaling statefulset ss2 to 0 Jan 28 21:47:17.302: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 21:47:17.307: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:47:17.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-35" for this suite. 
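The canary and phased steps above are all driven by spec.updateStrategy.rollingUpdate.partition: pods with an ordinal at or above the partition adopt the new revision, everything below keeps the old one until the partition is lowered, which is why only ss2-2 rolled during the canary. A hedged client-go sketch against the ss2 set from this run (Update can hit a conflict on a busy object; a real caller would retry):

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sts, err := client.AppsV1().StatefulSets("statefulset-35").Get(context.TODO(), "ss2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// New template plus partition=2: with 3 replicas, only ordinal 2 updates.
	partition := int32(2)
	sts.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.39-alpine"
	sts.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type:          appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
	}
	if _, err := client.AppsV1().StatefulSets("statefulset-35").Update(context.TODO(), sts, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("canary started; lower the partition to phase the rollout")
}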
• [SLOW TEST:154.223 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":68,"skipped":1022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:47:17.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jan 28 21:47:17.466: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix370306532/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:47:17.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-286" for this suite. 
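Reaching a proxy bound with --unix-socket takes a custom dialer: the HTTP request still needs a syntactically valid URL, but the host part is ignored once the connection rides the socket. A self-contained Go sketch using the socket path from the log:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy-unix370306532/test"

	// Route every request over the unix socket, whatever the URL says.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", sock)
		},
	}
	client := &http.Client{Transport: tr}

	resp, err := client.Get("http://localhost/api/") // host is a placeholder
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}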
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":69,"skipped":1064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:47:17.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jan 28 21:47:17.816: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:47:35.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4591" for this suite. • [SLOW TEST:18.066 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":70,"skipped":1106,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:47:35.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:47:36.655: INFO: Creating deployment "test-recreate-deployment" Jan 28 21:47:36.661: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 28 21:47:36.741: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 28 
------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:47:35.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:47:36.655: INFO: Creating deployment "test-recreate-deployment" Jan 28 21:47:36.661: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 28 21:47:36.741: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 28 21:47:38.756: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 28 21:47:38.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:47:40.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:47:42.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715844856, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:47:44.766: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 28 21:47:44.775: INFO: Updating deployment test-recreate-deployment Jan 28 21:47:44.775: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 28 21:47:45.068: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6775
/apis/apps/v1/namespaces/deployment-6775/deployments/test-recreate-deployment 423045b2-55af-4566-b09b-23927abeae99 4966035 2 2020-01-28 21:47:36 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00303b278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-28 21:47:45 +0000 UTC,LastTransitionTime:2020-01-28 21:47:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-28 21:47:45 +0000 UTC,LastTransitionTime:2020-01-28 21:47:36 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 28 21:47:45.080: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6775 /apis/apps/v1/namespaces/deployment-6775/replicasets/test-recreate-deployment-5f94c574ff 4aad49eb-8b26-4f07-91f6-bce664d91623 4966033 1 2020-01-28 21:47:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 423045b2-55af-4566-b09b-23927abeae99 0xc00303b607 0xc00303b608}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00303b668 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 28 21:47:45.080: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 28 21:47:45.080: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-6775 /apis/apps/v1/namespaces/deployment-6775/replicasets/test-recreate-deployment-799c574856 13bd4549-7249-4eb0-97fe-50da64d5a6f8 4966025 2 2020-01-28 21:47:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 423045b2-55af-4566-b09b-23927abeae99 0xc00303b6d7 0xc00303b6d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00303b748 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 28 21:47:45.142: INFO: Pod "test-recreate-deployment-5f94c574ff-5mvwq" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-5mvwq test-recreate-deployment-5f94c574ff- deployment-6775 /api/v1/namespaces/deployment-6775/pods/test-recreate-deployment-5f94c574ff-5mvwq 2252f85a-d686-4517-8901-357ab0e8167b 4966036 0 2020-01-28 21:47:44 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4aad49eb-8b26-4f07-91f6-bce664d91623 0xc002e964c7 0xc002e964c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k958z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k958z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k958z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:47:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:47:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 21:47:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 21:47:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:47:45.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6775" for this suite. • [SLOW TEST:9.507 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":71,"skipped":1144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:47:45.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 28 21:47:45.523: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2050 /api/v1/namespaces/watch-2050/configmaps/e2e-watch-test-resource-version 53fe01e8-271b-42e4-b282-0c61300dff30 4966049 0 2020-01-28 21:47:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 28 21:47:45.524: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2050 /api/v1/namespaces/watch-2050/configmaps/e2e-watch-test-resource-version 53fe01e8-271b-42e4-b282-0c61300dff30 4966050 0 2020-01-28 21:47:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:47:45.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2050" for this suite. 
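For reference, the behavior exercised by the Watchers test above (opening a watch at the resourceVersion returned by an earlier update, then receiving only the events that happened after it) can be reproduced by hand against the API. A minimal sketch; the object name, namespace, and proxy port are illustrative, not taken from this run:

kubectl create configmap watch-demo -n default --from-literal=mutation=0
kubectl patch configmap watch-demo -n default --type=merge -p '{"data":{"mutation":"1"}}'
# Capture the resourceVersion as of the first update.
RV=$(kubectl get configmap watch-demo -n default -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap watch-demo -n default --type=merge -p '{"data":{"mutation":"2"}}'
kubectl delete configmap watch-demo -n default

# Watch from the captured resourceVersion; the API server replays only the
# later MODIFIED (mutation: 2) and DELETED events, as asserted in the test.
kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}"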
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":72,"skipped":1169,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:47:45.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 in namespace container-probe-4886 Jan 28 21:47:57.976: INFO: Started pod liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 in namespace container-probe-4886 STEP: checking the pod's current state and verifying that restartCount is present Jan 28 21:47:57.980: INFO: Initial restart count of pod liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 is 0 Jan 28 21:48:14.107: INFO: Restart count of pod container-probe-4886/liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 is now 1 (16.127409682s elapsed) Jan 28 21:48:34.201: INFO: Restart count of pod container-probe-4886/liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 is now 2 (36.220818045s elapsed) Jan 28 21:48:52.274: INFO: Restart count of pod container-probe-4886/liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 is now 3 (54.294455218s elapsed) Jan 28 21:49:12.365: INFO: Restart count of pod container-probe-4886/liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 is now 4 (1m14.385604371s elapsed) Jan 28 21:50:15.268: INFO: Restart count of pod container-probe-4886/liveness-3d4f58ca-9008-40a9-9733-77ad4cfc2707 is now 5 (2m17.28795501s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:50:15.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4886" for this suite. 
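The monotonically increasing restart count above comes from a pod whose liveness probe starts failing shortly after startup, so the kubelet kills and restarts the container each time, and status.containerStatuses[].restartCount only ever goes up. A minimal pod that behaves the same way (image and names are illustrative, not the exact pod the suite creates):

kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    # Healthy for 30s, then the probed file disappears and every probe fails.
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount in status should increase monotonically, as the test asserts:
kubectl get pod liveness-demo -n default -o jsonpath='{.status.containerStatuses[0].restartCount}'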
• [SLOW TEST:149.741 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:50:15.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-3993 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3993 to expose endpoints map[] Jan 28 21:50:15.603: INFO: Get endpoints failed (6.620706ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 28 21:50:16.614: INFO: successfully validated that service multi-endpoint-test in namespace services-3993 exposes endpoints map[] (1.017591396s elapsed) STEP: Creating pod pod1 in namespace services-3993 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3993 to expose endpoints map[pod1:[100]] Jan 28 21:50:20.794: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.138308705s elapsed, will retry) Jan 28 21:50:25.862: INFO: successfully validated that service multi-endpoint-test in namespace services-3993 exposes endpoints map[pod1:[100]] (9.206080106s elapsed) STEP: Creating pod pod2 in namespace services-3993 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3993 to expose endpoints map[pod1:[100] pod2:[101]] Jan 28 21:50:30.804: INFO: Unexpected endpoints: found map[31fdcd37-fccc-4079-834b-e7528041cbfe:[100]], expected map[pod1:[100] pod2:[101]] (4.931144697s elapsed, will retry) Jan 28 21:50:32.908: INFO: successfully validated that service multi-endpoint-test in namespace services-3993 exposes endpoints map[pod1:[100] pod2:[101]] (7.035506229s elapsed) STEP: Deleting pod pod1 in namespace services-3993 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3993 to expose endpoints map[pod2:[101]] Jan 28 21:50:33.978: INFO: successfully validated that service multi-endpoint-test in namespace services-3993 exposes endpoints map[pod2:[101]] (1.061891086s elapsed) STEP: Deleting pod pod2 in namespace services-3993 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3993 to expose endpoints map[] Jan 28 21:50:35.013: INFO: successfully validated that service multi-endpoint-test in namespace services-3993 exposes endpoints map[] (1.026777692s 
elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:50:36.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3993" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.894 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":74,"skipped":1223,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:50:36.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jan 28 21:50:47.762: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6807 pod-service-account-493f6f0d-b5d8-4b0c-89f9-7a1596d5a6ce -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 28 21:50:50.649: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6807 pod-service-account-493f6f0d-b5d8-4b0c-89f9-7a1596d5a6ce -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 28 21:50:50.957: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6807 pod-service-account-493f6f0d-b5d8-4b0c-89f9-7a1596d5a6ce -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:50:51.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6807" for this suite. 
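The three exec commands above read the files that Kubernetes projects into every pod that automounts a service account token. The same layout can be inspected in any running pod (pod name illustrative):

# The token volume always contains these three files:
kubectl exec -n default some-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# token     - bearer token for the pod's service account
# ca.crt    - cluster CA bundle used to verify the API server
# namespace - the namespace the pod runs in
kubectl exec -n default some-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace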
• [SLOW TEST:15.118 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":75,"skipped":1230,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:50:51.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e65e5137-6a7c-4300-a7e9-41eb5bae69e9 STEP: Creating a pod to test consume configMaps Jan 28 21:50:51.418: INFO: Waiting up to 5m0s for pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e" in namespace "configmap-8402" to be "success or failure" Jan 28 21:50:51.431: INFO: Pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.976164ms Jan 28 21:50:53.439: INFO: Pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020725754s Jan 28 21:50:55.446: INFO: Pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027916806s Jan 28 21:50:57.497: INFO: Pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078485412s Jan 28 21:50:59.503: INFO: Pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084759179s STEP: Saw pod success Jan 28 21:50:59.503: INFO: Pod "pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e" satisfied condition "success or failure" Jan 28 21:50:59.508: INFO: Trying to get logs from node jerma-node pod pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e container configmap-volume-test: STEP: delete the pod Jan 28 21:50:59.726: INFO: Waiting for pod pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e to disappear Jan 28 21:50:59.743: INFO: Pod pod-configmaps-37fc535e-f450-459e-b3a0-0d2f477b4f2e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:50:59.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8402" for this suite. 
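"Volume with mappings as non-root" in the ConfigMap test above means the ConfigMap keys are remapped to explicit file paths via items, and the container runs under a non-root UID. A minimal equivalent of what the test builds (all names illustrative):

kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  securityContext:
    runAsUser: 1000           # non-root, the [LinuxOnly] part of the test name
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      items:                  # key -> relative file path mapping
      - key: data-1
        path: path/to/data-1
EOF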
• [SLOW TEST:8.406 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1246,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:50:59.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 28 21:50:59.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd" in namespace "downward-api-6573" to be "success or failure" Jan 28 21:50:59.877: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.633843ms Jan 28 21:51:01.886: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01824534s Jan 28 21:51:03.902: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034706347s Jan 28 21:51:05.912: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044000995s Jan 28 21:51:07.920: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052349807s Jan 28 21:51:09.926: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058729934s STEP: Saw pod success Jan 28 21:51:09.927: INFO: Pod "downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd" satisfied condition "success or failure" Jan 28 21:51:09.929: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd container client-container: STEP: delete the pod Jan 28 21:51:09.979: INFO: Waiting for pod downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd to disappear Jan 28 21:51:09.984: INFO: Pod downwardapi-volume-64a6df12-998d-4f4f-b33b-dab4b6aee2cd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:51:09.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6573" for this suite. 
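The "set mode on item file" case above mounts pod metadata as files through a downwardAPI volume and asserts the per-item file mode. A sketch of such a pod (names illustrative):

kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400            # the per-item mode under test (0400 = r--------)
EOF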
• [SLOW TEST:10.240 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1254,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:51:10.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0128 21:51:40.432937 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 28 21:51:40.433: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:51:40.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-536" for this suite. 
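deleteOptions.PropagationPolicy: Orphan deletes the Deployment but strips the ownerReference from its ReplicaSet instead of cascading, which is why the test waits 30 seconds and then verifies the garbage collector left the ReplicaSet alone. A hand-run equivalent; note that on kubectl v1.20+ the flag is --cascade=orphan, while v1.17-era clients like the one in this run used --cascade=false:

kubectl create deployment gc-demo --image=nginx -n default
# Delete only the Deployment; its ReplicaSet (and pods) are orphaned:
kubectl delete deployment gc-demo -n default --cascade=orphan
# The ReplicaSet survives, now with no ownerReferences:
kubectl get rs -n default -l app=gc-demo -o jsonpath='{.items[0].metadata.ownerReferences}'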
• [SLOW TEST:30.452 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":78,"skipped":1256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:51:40.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6677 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 28 21:51:40.621: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 28 21:52:18.941: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6677 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:52:18.942: INFO: >>> kubeConfig: /root/.kube/config I0128 21:52:19.010135 8 log.go:172] (0xc004a5e2c0) (0xc0020e2e60) Create stream I0128 21:52:19.010334 8 log.go:172] (0xc004a5e2c0) (0xc0020e2e60) Stream added, broadcasting: 1 I0128 21:52:19.016504 8 log.go:172] (0xc004a5e2c0) Reply frame received for 1 I0128 21:52:19.016603 8 log.go:172] (0xc004a5e2c0) (0xc0029d2aa0) Create stream I0128 21:52:19.016614 8 log.go:172] (0xc004a5e2c0) (0xc0029d2aa0) Stream added, broadcasting: 3 I0128 21:52:19.017727 8 log.go:172] (0xc004a5e2c0) Reply frame received for 3 I0128 21:52:19.017749 8 log.go:172] (0xc004a5e2c0) (0xc0020e30e0) Create stream I0128 21:52:19.017762 8 log.go:172] (0xc004a5e2c0) (0xc0020e30e0) Stream added, broadcasting: 5 I0128 21:52:19.019342 8 log.go:172] (0xc004a5e2c0) Reply frame received for 5 I0128 21:52:20.099586 8 log.go:172] (0xc004a5e2c0) Data frame received for 3 I0128 21:52:20.099659 8 log.go:172] (0xc0029d2aa0) (3) Data frame handling I0128 21:52:20.099689 8 log.go:172] (0xc0029d2aa0) (3) Data frame sent I0128 21:52:20.192831 8 log.go:172] (0xc004a5e2c0) Data frame received for 1 I0128 21:52:20.193249 8 log.go:172] (0xc004a5e2c0) (0xc0029d2aa0) Stream removed, broadcasting: 3 I0128 21:52:20.193440 8 log.go:172] (0xc004a5e2c0) (0xc0020e30e0) Stream removed, broadcasting: 5 I0128 21:52:20.193505 8 log.go:172] (0xc0020e2e60) (1) Data frame handling I0128 21:52:20.193546 8 log.go:172] (0xc0020e2e60) (1) Data frame sent I0128 21:52:20.193563 8 log.go:172] (0xc004a5e2c0) (0xc0020e2e60) Stream 
removed, broadcasting: 1 I0128 21:52:20.193603 8 log.go:172] (0xc004a5e2c0) Go away received I0128 21:52:20.193948 8 log.go:172] (0xc004a5e2c0) (0xc0020e2e60) Stream removed, broadcasting: 1 I0128 21:52:20.193972 8 log.go:172] (0xc004a5e2c0) (0xc0029d2aa0) Stream removed, broadcasting: 3 I0128 21:52:20.193989 8 log.go:172] (0xc004a5e2c0) (0xc0020e30e0) Stream removed, broadcasting: 5 Jan 28 21:52:20.194: INFO: Found all expected endpoints: [netserver-0] Jan 28 21:52:20.201: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.5 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6677 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:52:20.201: INFO: >>> kubeConfig: /root/.kube/config I0128 21:52:20.258936 8 log.go:172] (0xc004a5e9a0) (0xc0020e3b80) Create stream I0128 21:52:20.259233 8 log.go:172] (0xc004a5e9a0) (0xc0020e3b80) Stream added, broadcasting: 1 I0128 21:52:20.265106 8 log.go:172] (0xc004a5e9a0) Reply frame received for 1 I0128 21:52:20.265167 8 log.go:172] (0xc004a5e9a0) (0xc001778aa0) Create stream I0128 21:52:20.265179 8 log.go:172] (0xc004a5e9a0) (0xc001778aa0) Stream added, broadcasting: 3 I0128 21:52:20.266841 8 log.go:172] (0xc004a5e9a0) Reply frame received for 3 I0128 21:52:20.266869 8 log.go:172] (0xc004a5e9a0) (0xc0020e3c20) Create stream I0128 21:52:20.266878 8 log.go:172] (0xc004a5e9a0) (0xc0020e3c20) Stream added, broadcasting: 5 I0128 21:52:20.268431 8 log.go:172] (0xc004a5e9a0) Reply frame received for 5 I0128 21:52:21.371604 8 log.go:172] (0xc004a5e9a0) Data frame received for 3 I0128 21:52:21.371709 8 log.go:172] (0xc001778aa0) (3) Data frame handling I0128 21:52:21.371762 8 log.go:172] (0xc001778aa0) (3) Data frame sent I0128 21:52:21.475479 8 log.go:172] (0xc004a5e9a0) Data frame received for 1 I0128 21:52:21.475640 8 log.go:172] (0xc004a5e9a0) (0xc001778aa0) Stream removed, broadcasting: 3 I0128 21:52:21.475727 8 log.go:172] (0xc0020e3b80) (1) Data frame handling I0128 21:52:21.475775 8 log.go:172] (0xc0020e3b80) (1) Data frame sent I0128 21:52:21.475809 8 log.go:172] (0xc004a5e9a0) (0xc0020e3c20) Stream removed, broadcasting: 5 I0128 21:52:21.475862 8 log.go:172] (0xc004a5e9a0) (0xc0020e3b80) Stream removed, broadcasting: 1 I0128 21:52:21.475900 8 log.go:172] (0xc004a5e9a0) Go away received I0128 21:52:21.476256 8 log.go:172] (0xc004a5e9a0) (0xc0020e3b80) Stream removed, broadcasting: 1 I0128 21:52:21.476278 8 log.go:172] (0xc004a5e9a0) (0xc001778aa0) Stream removed, broadcasting: 3 I0128 21:52:21.476294 8 log.go:172] (0xc004a5e9a0) (0xc0020e3c20) Stream removed, broadcasting: 5 Jan 28 21:52:21.476: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:52:21.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6677" for this suite. 
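The exec traffic above is the whole UDP check: from a host-network test pod, send the literal string hostName to each netserver pod IP on UDP port 8081 and expect the pod's hostname back. Stripped of the SPDY stream bookkeeping, the probe reduces to (namespace, pod name, and IP as in this run):

kubectl exec -n pod-network-test-6677 host-test-container-pod -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.2 8081"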
• [SLOW TEST:41.039 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1285,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:52:21.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 28 21:52:21.772: INFO: Waiting up to 5m0s for pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351" in namespace "emptydir-2110" to be "success or failure" Jan 28 21:52:21.778: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262216ms Jan 28 21:52:23.800: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028028048s Jan 28 21:52:25.807: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035138974s Jan 28 21:52:27.834: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062126242s Jan 28 21:52:29.841: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069445496s Jan 28 21:52:31.866: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093691937s Jan 28 21:52:33.878: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Pending", Reason="", readiness=false. Elapsed: 12.106458524s Jan 28 21:52:35.889: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.116665802s STEP: Saw pod success Jan 28 21:52:35.889: INFO: Pod "pod-d35f3d29-d618-4ae1-b600-7edc0188f351" satisfied condition "success or failure" Jan 28 21:52:35.918: INFO: Trying to get logs from node jerma-node pod pod-d35f3d29-d618-4ae1-b600-7edc0188f351 container test-container: STEP: delete the pod Jan 28 21:52:35.976: INFO: Waiting for pod pod-d35f3d29-d618-4ae1-b600-7edc0188f351 to disappear Jan 28 21:52:35.996: INFO: Pod pod-d35f3d29-d618-4ae1-b600-7edc0188f351 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:52:35.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2110" for this suite. • [SLOW TEST:14.508 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1285,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:52:36.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:52:48.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2270" for this suite. • [SLOW TEST:12.134 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":81,"skipped":1288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:52:48.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-8704bcea-7461-49d8-9463-91f444cffd3c STEP: Creating a pod to test consume configMaps Jan 28 21:52:48.444: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6" in namespace "projected-79" to be "success or failure" Jan 28 21:52:48.492: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6": Phase="Pending", Reason="", readiness=false. Elapsed: 47.829439ms Jan 28 21:52:50.531: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086829243s Jan 28 21:52:52.576: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13153197s Jan 28 21:52:54.709: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264776295s Jan 28 21:52:56.714: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269986012s Jan 28 21:52:58.745: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.300581207s STEP: Saw pod success Jan 28 21:52:58.745: INFO: Pod "pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6" satisfied condition "success or failure" Jan 28 21:52:58.750: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6 container projected-configmap-volume-test: STEP: delete the pod Jan 28 21:52:59.026: INFO: Waiting for pod pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6 to disappear Jan 28 21:52:59.034: INFO: Pod pod-projected-configmaps-c9daecb2-f099-49d1-963f-33e1a6b910f6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:52:59.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-79" for this suite. 
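A projected volume bundles one or more sources (configMap, secret, downwardAPI, serviceAccountToken) under a single mount point; the test above consumes a ConfigMap through one while running as a non-root user. A minimal sketch (all names illustrative; the referenced ConfigMap could be the one from the earlier sketch):

kubectl apply -n default -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/projected/cm/data-1"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: cm-demo
          items:
          - key: data-1
            path: cm/data-1
EOF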
• [SLOW TEST:10.917 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1326,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:52:59.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Jan 28 21:52:59.185: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Jan 28 21:52:59.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7198' Jan 28 21:52:59.745: INFO: stderr: "" Jan 28 21:52:59.745: INFO: stdout: "service/agnhost-slave created\n" Jan 28 21:52:59.747: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Jan 28 21:52:59.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7198' Jan 28 21:53:00.222: INFO: stderr: "" Jan 28 21:53:00.222: INFO: stdout: "service/agnhost-master created\n" Jan 28 21:53:00.222: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 28 21:53:00.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7198' Jan 28 21:53:00.807: INFO: stderr: "" Jan 28 21:53:00.807: INFO: stdout: "service/frontend created\n" Jan 28 21:53:00.808: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 28 21:53:00.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7198' Jan 28 21:53:01.322: INFO: stderr: "" Jan 28 21:53:01.322: INFO: stdout: "deployment.apps/frontend created\n" Jan 28 21:53:01.323: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 28 21:53:01.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7198' Jan 28 21:53:02.043: INFO: stderr: "" Jan 28 21:53:02.043: INFO: stdout: "deployment.apps/agnhost-master created\n" Jan 28 21:53:02.045: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 28 21:53:02.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7198' Jan 28 21:53:02.741: INFO: stderr: "" Jan 28 21:53:02.741: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jan 28 21:53:02.741: INFO: Waiting for all frontend pods to be Running. Jan 28 21:53:22.793: INFO: Waiting for frontend to serve content. Jan 28 21:53:22.824: INFO: Trying to add a new entry to the guestbook. Jan 28 21:53:22.848: INFO: Verifying that added entry can be retrieved. Jan 28 21:53:22.876: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Jan 28 21:53:27.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7198' Jan 28 21:53:28.156: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:53:28.156: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jan 28 21:53:28.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7198' Jan 28 21:53:28.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:53:28.418: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 28 21:53:28.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7198' Jan 28 21:53:28.661: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:53:28.661: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 28 21:53:28.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7198' Jan 28 21:53:28.791: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:53:28.791: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 28 21:53:28.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7198' Jan 28 21:53:28.943: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:53:28.943: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 28 21:53:28.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7198' Jan 28 21:53:29.130: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 28 21:53:29.130: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:53:29.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7198" for this suite. 
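Two details worth noting in the guestbook run above: the "Failed to get response from guestbook" at 21:53:22 is a tolerated transient (the validation loop retries, and the test still ends in PASSED), and every cleanup call pipes the original manifest back into kubectl delete with --grace-period=0 --force, hence the repeated immediate-deletion warning. The six force deletions are equivalent to a single command of the form (namespace from this run):

kubectl delete --grace-period=0 --force -n kubectl-7198 \
  service/agnhost-slave service/agnhost-master service/frontend \
  deployment.apps/frontend deployment.apps/agnhost-master deployment.apps/agnhost-slave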
• [SLOW TEST:30.211 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":83,"skipped":1348,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:53:29.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1321 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 28 21:53:29.507: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 28 21:54:11.862: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1321 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:54:11.862: INFO: >>> kubeConfig: /root/.kube/config I0128 21:54:11.939101 8 log.go:172] (0xc002a2e160) (0xc0027fb720) Create stream I0128 21:54:11.939321 8 log.go:172] (0xc002a2e160) (0xc0027fb720) Stream added, broadcasting: 1 I0128 21:54:11.942820 8 log.go:172] (0xc002a2e160) Reply frame received for 1 I0128 21:54:11.942881 8 log.go:172] (0xc002a2e160) (0xc00137a6e0) Create stream I0128 21:54:11.942890 8 log.go:172] (0xc002a2e160) (0xc00137a6e0) Stream added, broadcasting: 3 I0128 21:54:11.944456 8 log.go:172] (0xc002a2e160) Reply frame received for 3 I0128 21:54:11.944501 8 log.go:172] (0xc002a2e160) (0xc0027fb9a0) Create stream I0128 21:54:11.944510 8 log.go:172] (0xc002a2e160) (0xc0027fb9a0) Stream added, broadcasting: 5 I0128 21:54:11.946263 8 log.go:172] (0xc002a2e160) Reply frame received for 5 I0128 21:54:12.053403 8 log.go:172] (0xc002a2e160) Data frame received for 3 I0128 21:54:12.053681 8 log.go:172] (0xc00137a6e0) (3) Data frame handling I0128 21:54:12.053723 8 log.go:172] (0xc00137a6e0) (3) Data frame sent I0128 21:54:12.139482 8 log.go:172] (0xc002a2e160) Data frame received for 1 I0128 21:54:12.139550 8 log.go:172] (0xc0027fb720) (1) Data frame handling I0128 21:54:12.139567 8 log.go:172] (0xc0027fb720) (1) Data frame sent I0128 21:54:12.141651 8 log.go:172] (0xc002a2e160) (0xc0027fb720) Stream removed, broadcasting: 1 I0128 21:54:12.141964 8 log.go:172] (0xc002a2e160) (0xc00137a6e0) Stream removed, broadcasting: 3 I0128 21:54:12.142116 8 log.go:172] 
(0xc002a2e160) (0xc0027fb9a0) Stream removed, broadcasting: 5 I0128 21:54:12.142222 8 log.go:172] (0xc002a2e160) (0xc0027fb720) Stream removed, broadcasting: 1 I0128 21:54:12.142238 8 log.go:172] (0xc002a2e160) (0xc00137a6e0) Stream removed, broadcasting: 3 I0128 21:54:12.142249 8 log.go:172] (0xc002a2e160) (0xc0027fb9a0) Stream removed, broadcasting: 5 Jan 28 21:54:12.142: INFO: Found all expected endpoints: [netserver-0] I0128 21:54:12.142762 8 log.go:172] (0xc002a2e160) Go away received Jan 28 21:54:12.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1321 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:54:12.150: INFO: >>> kubeConfig: /root/.kube/config I0128 21:54:12.211078 8 log.go:172] (0xc0028f2a50) (0xc001c3c820) Create stream I0128 21:54:12.211400 8 log.go:172] (0xc0028f2a50) (0xc001c3c820) Stream added, broadcasting: 1 I0128 21:54:12.219225 8 log.go:172] (0xc0028f2a50) Reply frame received for 1 I0128 21:54:12.219321 8 log.go:172] (0xc0028f2a50) (0xc0017780a0) Create stream I0128 21:54:12.219345 8 log.go:172] (0xc0028f2a50) (0xc0017780a0) Stream added, broadcasting: 3 I0128 21:54:12.221467 8 log.go:172] (0xc0028f2a50) Reply frame received for 3 I0128 21:54:12.221493 8 log.go:172] (0xc0028f2a50) (0xc001c3c960) Create stream I0128 21:54:12.221503 8 log.go:172] (0xc0028f2a50) (0xc001c3c960) Stream added, broadcasting: 5 I0128 21:54:12.222889 8 log.go:172] (0xc0028f2a50) Reply frame received for 5 I0128 21:54:12.323251 8 log.go:172] (0xc0028f2a50) Data frame received for 3 I0128 21:54:12.323742 8 log.go:172] (0xc0017780a0) (3) Data frame handling I0128 21:54:12.323799 8 log.go:172] (0xc0017780a0) (3) Data frame sent I0128 21:54:12.424068 8 log.go:172] (0xc0028f2a50) (0xc0017780a0) Stream removed, broadcasting: 3 I0128 21:54:12.424328 8 log.go:172] (0xc0028f2a50) Data frame received for 1 I0128 21:54:12.424363 8 log.go:172] (0xc001c3c820) (1) Data frame handling I0128 21:54:12.424401 8 log.go:172] (0xc001c3c820) (1) Data frame sent I0128 21:54:12.424425 8 log.go:172] (0xc0028f2a50) (0xc001c3c820) Stream removed, broadcasting: 1 I0128 21:54:12.424689 8 log.go:172] (0xc0028f2a50) (0xc001c3c960) Stream removed, broadcasting: 5 I0128 21:54:12.424864 8 log.go:172] (0xc0028f2a50) Go away received I0128 21:54:12.424929 8 log.go:172] (0xc0028f2a50) (0xc001c3c820) Stream removed, broadcasting: 1 I0128 21:54:12.424966 8 log.go:172] (0xc0028f2a50) (0xc0017780a0) Stream removed, broadcasting: 3 I0128 21:54:12.424976 8 log.go:172] (0xc0028f2a50) (0xc001c3c960) Stream removed, broadcasting: 5 Jan 28 21:54:12.425: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:54:12.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1321" for this suite. 
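Same node-to-pod check as the UDP case earlier, but over HTTP: the host-network test pod curls each netserver's /hostName endpoint on port 8080, and a hostname in the response proves reachability. Without the stream bookkeeping the probe is just (namespace, pod name, and IP as in this run):

kubectl exec -n pod-network-test-1321 host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName"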
• [SLOW TEST:43.172 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1351,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:54:12.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-d3521285-c0c4-40fd-93a9-03d6dde5bcdc STEP: Creating a pod to test consume configMaps Jan 28 21:54:12.578: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a" in namespace "configmap-66" to be "success or failure" Jan 28 21:54:12.582: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.693589ms Jan 28 21:54:14.936: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35843792s Jan 28 21:54:16.943: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364922181s Jan 28 21:54:19.003: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425067176s Jan 28 21:54:21.013: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.435165018s Jan 28 21:54:23.019: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.441642918s Jan 28 21:54:25.034: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.456511854s Jan 28 21:54:27.064: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.48621957s STEP: Saw pod success Jan 28 21:54:27.065: INFO: Pod "pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a" satisfied condition "success or failure" Jan 28 21:54:27.070: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a container configmap-volume-test: STEP: delete the pod Jan 28 21:54:27.095: INFO: Waiting for pod pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a to disappear Jan 28 21:54:27.104: INFO: Pod pod-configmaps-5f3e5042-359e-4f3d-b3d4-4198d6d95b2a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:54:27.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-66" for this suite. • [SLOW TEST:14.712 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1363,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:54:27.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ff437398-2a76-4747-a2c4-2441ffa93cd8 STEP: Creating a pod to test consume secrets Jan 28 21:54:27.343: INFO: Waiting up to 5m0s for pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46" in namespace "secrets-9808" to be "success or failure" Jan 28 21:54:27.368: INFO: Pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46": Phase="Pending", Reason="", readiness=false. Elapsed: 25.179019ms Jan 28 21:54:29.376: INFO: Pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032440486s Jan 28 21:54:31.382: INFO: Pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038531821s Jan 28 21:54:33.390: INFO: Pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046907431s Jan 28 21:54:35.396: INFO: Pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46": Phase="Succeeded", Reason="", readiness=false. 
STEP: Saw pod success Jan 28 21:54:35.396: INFO: Pod "pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46" satisfied condition "success or failure" Jan 28 21:54:35.398: INFO: Trying to get logs from node jerma-node pod pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46 container secret-volume-test: STEP: delete the pod Jan 28 21:54:35.430: INFO: Waiting for pod pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46 to disappear Jan 28 21:54:35.482: INFO: Pod pod-secrets-3ccc5ee3-2cc7-483c-bc57-81757ab2fc46 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:54:35.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9808" for this suite. • [SLOW TEST:8.335 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1372,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:54:35.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 28 21:54:35.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8" in namespace "projected-5568" to be "success or failure" Jan 28 21:54:35.712: INFO: Pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.229217ms Jan 28 21:54:37.772: INFO: Pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070800972s Jan 28 21:54:39.780: INFO: Pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079111297s Jan 28 21:54:41.793: INFO: Pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092451333s Jan 28 21:54:43.804: INFO: Pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102634689s
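What this test checks is that a downward API resourceFieldRef for limits.cpu falls back to the node's allocatable CPU when the container declares no limit. A sketch of such a pod; the image, command, and file path are assumptions:

kubectl create --namespace=projected-5568 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-default-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # with no resources.limits.cpu set, this file reports node allocatable CPU
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF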
STEP: Saw pod success Jan 28 21:54:43.804: INFO: Pod "downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8" satisfied condition "success or failure" Jan 28 21:54:43.811: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8 container client-container: STEP: delete the pod Jan 28 21:54:43.864: INFO: Waiting for pod downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8 to disappear Jan 28 21:54:43.915: INFO: Pod downwardapi-volume-a0a3c451-c481-4188-bfb5-6979ba5192a8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:54:43.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5568" for this suite. • [SLOW TEST:8.445 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1377,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:54:43.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jan 28 21:54:54.177: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7620 PodName:pod-sharedvolume-bf562424-657d-43ff-bfd8-afa84bde13b2 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:54:54.177: INFO: >>> kubeConfig: /root/.kube/config I0128 21:54:54.232736 8 log.go:172] (0xc0018a6370) (0xc0020e2280) Create stream I0128 21:54:54.232817 8 log.go:172] (0xc0018a6370) (0xc0020e2280) Stream added, broadcasting: 1 I0128 21:54:54.236537 8 log.go:172] (0xc0018a6370) Reply frame received for 1 I0128 21:54:54.236578 8 log.go:172] (0xc0018a6370) (0xc002979b80) Create stream I0128 21:54:54.236589 8 log.go:172] (0xc0018a6370) (0xc002979b80) Stream added, broadcasting: 3 I0128 21:54:54.237971 8 log.go:172] (0xc0018a6370) Reply frame received for 3 I0128 21:54:54.237996 8 log.go:172] (0xc0018a6370) (0xc0020e25a0) Create stream I0128 21:54:54.238006 8 log.go:172] (0xc0018a6370) (0xc0020e25a0) Stream added, broadcasting: 5 I0128 21:54:54.240783 8 log.go:172] (0xc0018a6370) Reply frame received for 5 I0128 21:54:54.324942 8 log.go:172]
(0xc0018a6370) Data frame received for 3 I0128 21:54:54.325033 8 log.go:172] (0xc002979b80) (3) Data frame handling I0128 21:54:54.325478 8 log.go:172] (0xc002979b80) (3) Data frame sent I0128 21:54:54.448657 8 log.go:172] (0xc0018a6370) (0xc002979b80) Stream removed, broadcasting: 3 I0128 21:54:54.449004 8 log.go:172] (0xc0018a6370) Data frame received for 1 I0128 21:54:54.449028 8 log.go:172] (0xc0020e2280) (1) Data frame handling I0128 21:54:54.449050 8 log.go:172] (0xc0020e2280) (1) Data frame sent I0128 21:54:54.449057 8 log.go:172] (0xc0018a6370) (0xc0020e2280) Stream removed, broadcasting: 1 I0128 21:54:54.449338 8 log.go:172] (0xc0018a6370) (0xc0020e25a0) Stream removed, broadcasting: 5 I0128 21:54:54.449384 8 log.go:172] (0xc0018a6370) (0xc0020e2280) Stream removed, broadcasting: 1 I0128 21:54:54.449394 8 log.go:172] (0xc0018a6370) (0xc002979b80) Stream removed, broadcasting: 3 I0128 21:54:54.449403 8 log.go:172] (0xc0018a6370) (0xc0020e25a0) Stream removed, broadcasting: 5 Jan 28 21:54:54.449: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:54:54.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0128 21:54:54.450631 8 log.go:172] (0xc0018a6370) Go away received STEP: Destroying namespace "emptydir-7620" for this suite. • [SLOW TEST:10.541 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":88,"skipped":1387,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:54:54.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 28 21:54:54.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450" in namespace "projected-6816" to be "success or failure" Jan 28 21:54:54.684: INFO: Pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450": Phase="Pending", Reason="", readiness=false. Elapsed: 76.834463ms Jan 28 21:54:56.691: INFO: Pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.083680784s Jan 28 21:54:58.704: INFO: Pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096521408s Jan 28 21:55:00.710: INFO: Pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102404405s Jan 28 21:55:02.718: INFO: Pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111068914s STEP: Saw pod success Jan 28 21:55:02.718: INFO: Pod "downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450" satisfied condition "success or failure" Jan 28 21:55:02.722: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450 container client-container: STEP: delete the pod Jan 28 21:55:02.906: INFO: Waiting for pod downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450 to disappear Jan 28 21:55:02.914: INFO: Pod downwardapi-volume-99d3f2e8-8ba1-4029-95c1-93e20b8ba450 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:55:02.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6816" for this suite. • [SLOW TEST:8.447 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1387,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:55:02.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 28 21:55:03.044: INFO: Waiting up to 5m0s for pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10" in namespace "emptydir-7205" to be "success or failure" Jan 28 21:55:03.048: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223585ms Jan 28 21:55:05.059: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015083851s Jan 28 21:55:07.141: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096852865s Jan 28 21:55:09.148: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.104400789s Jan 28 21:55:11.154: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110369175s Jan 28 21:55:13.163: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118938544s STEP: Saw pod success Jan 28 21:55:13.163: INFO: Pod "pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10" satisfied condition "success or failure" Jan 28 21:55:13.169: INFO: Trying to get logs from node jerma-node pod pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10 container test-container: STEP: delete the pod Jan 28 21:55:13.238: INFO: Waiting for pod pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10 to disappear Jan 28 21:55:13.245: INFO: Pod pod-1c3aa50c-1215-439c-ba2f-5a9bab64ce10 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:55:13.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7205" for this suite. • [SLOW TEST:10.332 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1391,"failed":0} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:55:13.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:55:13.560: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e" in namespace "security-context-test-278" to be "success or failure" Jan 28 21:55:13.579: INFO: Pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.415991ms Jan 28 21:55:15.585: INFO: Pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025154641s Jan 28 21:55:17.593: INFO: Pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033251834s Jan 28 21:55:19.600: INFO: Pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.040210733s Jan 28 21:55:21.605: INFO: Pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04516645s Jan 28 21:55:21.605: INFO: Pod "alpine-nnp-false-5d03e025-a6bd-4d0e-a471-ff595acbc63e" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:55:21.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-278" for this suite. • [SLOW TEST:8.371 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:55:21.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:55:32.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1715" for this suite. • [SLOW TEST:11.262 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":92,"skipped":1452,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:55:32.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:55:40.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3459" for this suite. • [SLOW TEST:7.185 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":93,"skipped":1454,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:55:40.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:55:40.652: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:55:42.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:55:44.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:55:46.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:55:48.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845340, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:55:51.846: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:55:51.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6513-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:55:53.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3927" for this suite. STEP: Destroying namespace "webhook-3927-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.316 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":94,"skipped":1461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:55:53.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9459 STEP: creating replication controller nodeport-test in namespace services-9459 I0128 21:55:53.963331 8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9459, replica count: 2 I0128 21:55:57.014355 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:56:00.014779 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:56:03.015065 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 21:56:06.015893 8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 28 21:56:06.016: INFO: Creating new exec pod Jan 28 21:56:15.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-9459 execpodxl6fm -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 28 21:56:15.454: INFO: stderr: "I0128 21:56:15.295243 2393 log.go:172] (0xc000b12370) (0xc000a58320) Create stream\nI0128 21:56:15.295571 2393 log.go:172] (0xc000b12370) (0xc000a58320) Stream added, broadcasting: 1\nI0128 21:56:15.300470 2393 log.go:172] (0xc000b12370) Reply frame received for 1\nI0128 21:56:15.300580 2393 log.go:172] (0xc000b12370) (0xc000aae280) Create stream\nI0128 21:56:15.300604 2393 log.go:172] (0xc000b12370) (0xc000aae280) Stream added, broadcasting: 3\nI0128 21:56:15.301961 2393 log.go:172] (0xc000b12370) Reply frame received for 3\nI0128 21:56:15.301989 2393 log.go:172] (0xc000b12370) (0xc000aae320) Create stream\nI0128 21:56:15.301996 2393 log.go:172] (0xc000b12370) (0xc000aae320) Stream added, broadcasting: 5\nI0128 21:56:15.303504 2393 log.go:172] (0xc000b12370) Reply frame received for 5\nI0128 21:56:15.378720 2393 log.go:172] (0xc000b12370) Data frame received for 5\nI0128 21:56:15.378862 2393 log.go:172] (0xc000aae320) (5) Data frame handling\nI0128 21:56:15.378892 2393 log.go:172] (0xc000aae320) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0128 21:56:15.381093 2393 log.go:172] (0xc000b12370) Data frame received for 5\nI0128 21:56:15.381118 2393 log.go:172] (0xc000aae320) (5) Data frame handling\nI0128 21:56:15.381132 2393 log.go:172] (0xc000aae320) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0128 21:56:15.440544 2393 log.go:172] (0xc000b12370) Data frame received for 1\nI0128 21:56:15.440646 2393 log.go:172] (0xc000b12370) (0xc000aae280) Stream removed, broadcasting: 3\nI0128 21:56:15.440706 2393 log.go:172] (0xc000a58320) (1) Data frame handling\nI0128 21:56:15.440729 2393 log.go:172] (0xc000a58320) (1) Data frame sent\nI0128 21:56:15.440759 2393 log.go:172] (0xc000b12370) (0xc000a58320) Stream removed, broadcasting: 1\nI0128 21:56:15.444415 2393 log.go:172] (0xc000b12370) (0xc000aae320) Stream removed, broadcasting: 5\nI0128 21:56:15.444504 2393 log.go:172] (0xc000b12370) (0xc000a58320) Stream removed, broadcasting: 1\nI0128 21:56:15.444531 2393 log.go:172] (0xc000b12370) (0xc000aae280) Stream removed, broadcasting: 3\nI0128 21:56:15.444547 2393 log.go:172] (0xc000b12370) (0xc000aae320) Stream removed, broadcasting: 5\n" Jan 28 21:56:15.455: INFO: stdout: "" Jan 28 21:56:15.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodxl6fm -- /bin/sh -x -c nc -zv -t -w 2 10.96.71.217 80' Jan 28 21:56:15.847: INFO: stderr: "I0128 21:56:15.669473 2413 log.go:172] (0xc0009eedc0) (0xc000a80320) Create stream\nI0128 21:56:15.669745 2413 log.go:172] (0xc0009eedc0) (0xc000a80320) Stream added, broadcasting: 1\nI0128 21:56:15.674081 2413 log.go:172] (0xc0009eedc0) Reply frame received for 1\nI0128 21:56:15.674153 2413 log.go:172] (0xc0009eedc0) (0xc000a68140) Create stream\nI0128 21:56:15.674172 2413 log.go:172] (0xc0009eedc0) (0xc000a68140) Stream added, broadcasting: 3\nI0128 21:56:15.675669 2413 log.go:172] (0xc0009eedc0) Reply frame received for 3\nI0128 21:56:15.675697 2413 log.go:172] (0xc0009eedc0) (0xc000a681e0) Create stream\nI0128 21:56:15.675706 2413 log.go:172] (0xc0009eedc0) (0xc000a681e0) Stream added, broadcasting: 5\nI0128 21:56:15.676827 2413 log.go:172] (0xc0009eedc0) Reply frame received for 5\nI0128 21:56:15.751026 2413 log.go:172] (0xc0009eedc0) Data frame received for 5\nI0128 21:56:15.751110 2413 log.go:172] (0xc000a681e0) (5) Data frame 
handling\nI0128 21:56:15.751134 2413 log.go:172] (0xc000a681e0) (5) Data frame sent\nI0128 21:56:15.751143 2413 log.go:172] (0xc0009eedc0) Data frame received for 5\nI0128 21:56:15.751153 2413 log.go:172] (0xc000a681e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.71.217 80\nI0128 21:56:15.751203 2413 log.go:172] (0xc000a681e0) (5) Data frame sent\nI0128 21:56:15.751210 2413 log.go:172] (0xc0009eedc0) Data frame received for 5\nI0128 21:56:15.751290 2413 log.go:172] (0xc000a681e0) (5) Data frame handling\nI0128 21:56:15.751301 2413 log.go:172] (0xc000a681e0) (5) Data frame sent\nConnection to 10.96.71.217 80 port [tcp/http] succeeded!\nI0128 21:56:15.829614 2413 log.go:172] (0xc0009eedc0) Data frame received for 1\nI0128 21:56:15.829728 2413 log.go:172] (0xc0009eedc0) (0xc000a681e0) Stream removed, broadcasting: 5\nI0128 21:56:15.829837 2413 log.go:172] (0xc000a80320) (1) Data frame handling\nI0128 21:56:15.829868 2413 log.go:172] (0xc000a80320) (1) Data frame sent\nI0128 21:56:15.829900 2413 log.go:172] (0xc0009eedc0) (0xc000a68140) Stream removed, broadcasting: 3\nI0128 21:56:15.829938 2413 log.go:172] (0xc0009eedc0) (0xc000a80320) Stream removed, broadcasting: 1\nI0128 21:56:15.829958 2413 log.go:172] (0xc0009eedc0) Go away received\nI0128 21:56:15.830896 2413 log.go:172] (0xc0009eedc0) (0xc000a80320) Stream removed, broadcasting: 1\nI0128 21:56:15.830911 2413 log.go:172] (0xc0009eedc0) (0xc000a68140) Stream removed, broadcasting: 3\nI0128 21:56:15.830917 2413 log.go:172] (0xc0009eedc0) (0xc000a681e0) Stream removed, broadcasting: 5\n" Jan 28 21:56:15.847: INFO: stdout: "" Jan 28 21:56:15.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodxl6fm -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30929' Jan 28 21:56:16.228: INFO: stderr: "I0128 21:56:16.064163 2433 log.go:172] (0xc000b2f130) (0xc000a52320) Create stream\nI0128 21:56:16.064321 2433 log.go:172] (0xc000b2f130) (0xc000a52320) Stream added, broadcasting: 1\nI0128 21:56:16.075113 2433 log.go:172] (0xc000b2f130) Reply frame received for 1\nI0128 21:56:16.075412 2433 log.go:172] (0xc000b2f130) (0xc0009f8320) Create stream\nI0128 21:56:16.075554 2433 log.go:172] (0xc000b2f130) (0xc0009f8320) Stream added, broadcasting: 3\nI0128 21:56:16.078851 2433 log.go:172] (0xc000b2f130) Reply frame received for 3\nI0128 21:56:16.078949 2433 log.go:172] (0xc000b2f130) (0xc0004d3540) Create stream\nI0128 21:56:16.078961 2433 log.go:172] (0xc000b2f130) (0xc0004d3540) Stream added, broadcasting: 5\nI0128 21:56:16.080024 2433 log.go:172] (0xc000b2f130) Reply frame received for 5\nI0128 21:56:16.143930 2433 log.go:172] (0xc000b2f130) Data frame received for 5\nI0128 21:56:16.144053 2433 log.go:172] (0xc0004d3540) (5) Data frame handling\nI0128 21:56:16.144080 2433 log.go:172] (0xc0004d3540) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30929\nI0128 21:56:16.147709 2433 log.go:172] (0xc000b2f130) Data frame received for 5\nI0128 21:56:16.147730 2433 log.go:172] (0xc0004d3540) (5) Data frame handling\nI0128 21:56:16.147748 2433 log.go:172] (0xc0004d3540) (5) Data frame sent\nConnection to 10.96.2.250 30929 port [tcp/30929] succeeded!\nI0128 21:56:16.215575 2433 log.go:172] (0xc000b2f130) (0xc0004d3540) Stream removed, broadcasting: 5\nI0128 21:56:16.215799 2433 log.go:172] (0xc000b2f130) Data frame received for 1\nI0128 21:56:16.215819 2433 log.go:172] (0xc000a52320) (1) Data frame handling\nI0128 21:56:16.215849 2433 log.go:172] (0xc000a52320) (1) Data frame sent\nI0128 21:56:16.215893 
2433 log.go:172] (0xc000b2f130) (0xc000a52320) Stream removed, broadcasting: 1\nI0128 21:56:16.216262 2433 log.go:172] (0xc000b2f130) (0xc0009f8320) Stream removed, broadcasting: 3\nI0128 21:56:16.216344 2433 log.go:172] (0xc000b2f130) Go away received\nI0128 21:56:16.217082 2433 log.go:172] (0xc000b2f130) (0xc000a52320) Stream removed, broadcasting: 1\nI0128 21:56:16.217100 2433 log.go:172] (0xc000b2f130) (0xc0009f8320) Stream removed, broadcasting: 3\nI0128 21:56:16.217109 2433 log.go:172] (0xc000b2f130) (0xc0004d3540) Stream removed, broadcasting: 5\n" Jan 28 21:56:16.229: INFO: stdout: "" Jan 28 21:56:16.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9459 execpodxl6fm -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30929' Jan 28 21:56:16.633: INFO: stderr: "I0128 21:56:16.414379 2456 log.go:172] (0xc000b3d290) (0xc000b00780) Create stream\nI0128 21:56:16.414700 2456 log.go:172] (0xc000b3d290) (0xc000b00780) Stream added, broadcasting: 1\nI0128 21:56:16.419013 2456 log.go:172] (0xc000b3d290) Reply frame received for 1\nI0128 21:56:16.419122 2456 log.go:172] (0xc000b3d290) (0xc000b00820) Create stream\nI0128 21:56:16.419141 2456 log.go:172] (0xc000b3d290) (0xc000b00820) Stream added, broadcasting: 3\nI0128 21:56:16.420625 2456 log.go:172] (0xc000b3d290) Reply frame received for 3\nI0128 21:56:16.420662 2456 log.go:172] (0xc000b3d290) (0xc00029f2c0) Create stream\nI0128 21:56:16.420672 2456 log.go:172] (0xc000b3d290) (0xc00029f2c0) Stream added, broadcasting: 5\nI0128 21:56:16.422237 2456 log.go:172] (0xc000b3d290) Reply frame received for 5\nI0128 21:56:16.528517 2456 log.go:172] (0xc000b3d290) Data frame received for 5\nI0128 21:56:16.528619 2456 log.go:172] (0xc00029f2c0) (5) Data frame handling\nI0128 21:56:16.528656 2456 log.go:172] (0xc00029f2c0) (5) Data frame sent\nI0128 21:56:16.528668 2456 log.go:172] (0xc000b3d290) Data frame received for 5\nI0128 21:56:16.528676 2456 log.go:172] (0xc00029f2c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 30929\nConnection to 10.96.1.234 30929 port [tcp/30929] succeeded!\nI0128 21:56:16.528719 2456 log.go:172] (0xc00029f2c0) (5) Data frame sent\nI0128 21:56:16.621167 2456 log.go:172] (0xc000b3d290) (0xc000b00820) Stream removed, broadcasting: 3\nI0128 21:56:16.621274 2456 log.go:172] (0xc000b3d290) (0xc00029f2c0) Stream removed, broadcasting: 5\nI0128 21:56:16.621404 2456 log.go:172] (0xc000b3d290) Data frame received for 1\nI0128 21:56:16.621418 2456 log.go:172] (0xc000b00780) (1) Data frame handling\nI0128 21:56:16.621434 2456 log.go:172] (0xc000b00780) (1) Data frame sent\nI0128 21:56:16.621444 2456 log.go:172] (0xc000b3d290) (0xc000b00780) Stream removed, broadcasting: 1\nI0128 21:56:16.621459 2456 log.go:172] (0xc000b3d290) Go away received\nI0128 21:56:16.622324 2456 log.go:172] (0xc000b3d290) (0xc000b00780) Stream removed, broadcasting: 1\nI0128 21:56:16.622338 2456 log.go:172] (0xc000b3d290) (0xc000b00820) Stream removed, broadcasting: 3\nI0128 21:56:16.622345 2456 log.go:172] (0xc000b3d290) (0xc00029f2c0) Stream removed, broadcasting: 5\n" Jan 28 21:56:16.633: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:56:16.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9459" for this suite. 
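The service under test reduces to a small manifest; the exec'd nc probes above then verify it three ways: by service name (nodeport-test:80), by cluster IP (10.96.71.217:80), and by each node's IP on the assigned nodePort (10.96.2.250:30929 and 10.96.1.234:30929). A sketch with assumed port numbers and selector, since the actual spec is not printed in the log:

kubectl create --namespace=services-9459 -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    name: nodeport-test
  ports:
  - port: 80
    targetPort: 80
EOF
# the apiserver assigns the nodePort (30929 in this run); recover it, then
# probe a node directly from any host that can reach the node IPs
NODE_PORT=$(kubectl get svc nodeport-test --namespace=services-9459 -o jsonpath='{.spec.ports[0].nodePort}')
nc -zv -t -w 2 10.96.2.250 "$NODE_PORT"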
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.247 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":95,"skipped":1510,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:56:16.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-8730/configmap-test-612e1fe5-4a3f-4eb5-b3bc-291917f296a2 STEP: Creating a pod to test consume configMaps Jan 28 21:56:16.790: INFO: Waiting up to 5m0s for pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f" in namespace "configmap-8730" to be "success or failure" Jan 28 21:56:16.800: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.732904ms Jan 28 21:56:18.809: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018460656s Jan 28 21:56:20.816: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025350457s Jan 28 21:56:22.822: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031310491s Jan 28 21:56:24.827: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036963568s Jan 28 21:56:26.836: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.045860061s Jan 28 21:56:28.844: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.053924542s STEP: Saw pod success Jan 28 21:56:28.845: INFO: Pod "pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f" satisfied condition "success or failure" Jan 28 21:56:28.856: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f container env-test: STEP: delete the pod Jan 28 21:56:29.415: INFO: Waiting for pod pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f to disappear Jan 28 21:56:29.425: INFO: Pod pod-configmaps-9db72ed7-ea9c-4ed5-a6a3-e74b17bdd99f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:56:29.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8730" for this suite. • [SLOW TEST:12.843 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1517,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:56:29.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
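The "simple DaemonSet" being created is not printed in the log; a minimal equivalent would look roughly like this (label key and image are assumptions), after which the polling below waits for one available pod per schedulable node, two in this cluster:

kubectl apply --namespace=daemonsets-2740 -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
EOF
# an equivalent readiness check: blocks until desired == available on every node
kubectl rollout status ds/daemon-set --namespace=daemonsets-2740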
Jan 28 21:56:29.796: INFO: Number of nodes with available pods: 0 Jan 28 21:56:29.796: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:31.749: INFO: Number of nodes with available pods: 0 Jan 28 21:56:31.750: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:31.958: INFO: Number of nodes with available pods: 0 Jan 28 21:56:31.958: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:32.811: INFO: Number of nodes with available pods: 0 Jan 28 21:56:32.811: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:33.911: INFO: Number of nodes with available pods: 0 Jan 28 21:56:33.912: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:35.751: INFO: Number of nodes with available pods: 0 Jan 28 21:56:35.751: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:36.997: INFO: Number of nodes with available pods: 0 Jan 28 21:56:36.998: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:37.995: INFO: Number of nodes with available pods: 0 Jan 28 21:56:37.996: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:38.805: INFO: Number of nodes with available pods: 1 Jan 28 21:56:38.805: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 28 21:56:39.826: INFO: Number of nodes with available pods: 2 Jan 28 21:56:39.826: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jan 28 21:56:39.933: INFO: Number of nodes with available pods: 1 Jan 28 21:56:39.933: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:40.948: INFO: Number of nodes with available pods: 1 Jan 28 21:56:40.948: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:41.950: INFO: Number of nodes with available pods: 1 Jan 28 21:56:41.950: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:42.955: INFO: Number of nodes with available pods: 1 Jan 28 21:56:42.956: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:43.949: INFO: Number of nodes with available pods: 1 Jan 28 21:56:43.949: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:44.951: INFO: Number of nodes with available pods: 1 Jan 28 21:56:44.951: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:45.959: INFO: Number of nodes with available pods: 1 Jan 28 21:56:45.959: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:46.957: INFO: Number of nodes with available pods: 1 Jan 28 21:56:46.957: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:47.949: INFO: Number of nodes with available pods: 1 Jan 28 21:56:47.950: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:48.952: INFO: Number of nodes with available pods: 1 Jan 28 21:56:48.952: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:49.954: INFO: Number of nodes with available pods: 1 Jan 28 21:56:49.954: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:50.974: INFO: Number of nodes with available pods: 1 Jan 28 21:56:50.975: INFO: Node jerma-node is running more than one daemon pod Jan 28 21:56:51.950: INFO: Number of nodes with available pods: 2 Jan 28 21:56:51.950: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2740, will wait for the garbage collector to delete the pods Jan 28 21:56:52.018: INFO: Deleting DaemonSet.extensions daemon-set took: 9.896204ms Jan 28 21:56:52.319: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.5475ms Jan 28 21:57:03.215: INFO: Number of nodes with available pods: 0 Jan 28 21:57:03.215: INFO: Number of running nodes: 0, number of available pods: 0 Jan 28 21:57:03.219: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2740/daemonsets","resourceVersion":"4968385"},"items":null} Jan 28 21:57:03.222: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2740/pods","resourceVersion":"4968385"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:57:03.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2740" for this suite. • [SLOW TEST:33.753 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":97,"skipped":1521,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:57:03.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Jan 28 21:57:03.356: INFO: Waiting up to 5m0s for pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786" in namespace "emptydir-5087" to be "success or failure" Jan 28 21:57:03.380: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786": Phase="Pending", Reason="", readiness=false. Elapsed: 24.411371ms Jan 28 21:57:05.390: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034075732s Jan 28 21:57:07.400: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044310273s Jan 28 21:57:09.408: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052159785s Jan 28 21:57:11.416: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.060442679s Jan 28 21:57:13.426: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069849133s STEP: Saw pod success Jan 28 21:57:13.426: INFO: Pod "pod-1343b545-3ca3-4f05-b737-2e76b2f4c786" satisfied condition "success or failure" Jan 28 21:57:13.430: INFO: Trying to get logs from node jerma-node pod pod-1343b545-3ca3-4f05-b737-2e76b2f4c786 container test-container: STEP: delete the pod Jan 28 21:57:13.485: INFO: Waiting for pod pod-1343b545-3ca3-4f05-b737-2e76b2f4c786 to disappear Jan 28 21:57:13.489: INFO: Pod pod-1343b545-3ca3-4f05-b737-2e76b2f4c786 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:57:13.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5087" for this suite. • [SLOW TEST:10.256 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1542,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:57:13.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 28 21:57:13.610: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5" in namespace "security-context-test-5168" to be "success or failure" Jan 28 21:57:13.643: INFO: Pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.919701ms Jan 28 21:57:15.656: INFO: Pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045728252s Jan 28 21:57:17.664: INFO: Pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05396402s Jan 28 21:57:19.673: INFO: Pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062658486s Jan 28 21:57:21.680: INFO: Pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.069656282s Jan 28 21:57:21.680: INFO: Pod "busybox-readonly-false-69b95670-798b-4a04-9009-2e2878565fd5" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:57:21.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5168" for this suite. • [SLOW TEST:8.192 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:57:21.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:57:22.718: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:57:24.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:57:27.972: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:57:28.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:57:30.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845442, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:57:33.792: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:57:33.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4864" for this suite. 
STEP: Destroying namespace "webhook-4864-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.406 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":100,"skipped":1569,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:57:34.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f Jan 28 21:57:34.208: INFO: Pod name my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f: Found 0 pods out of 1 Jan 28 21:57:39.239: INFO: Pod name my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f: Found 1 pods out of 1 Jan 28 21:57:39.239: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f" are running Jan 28 21:57:43.305: INFO: Pod "my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f-vmrg8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 21:57:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 21:57:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 21:57:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 21:57:34 +0000 UTC Reason: Message:}]) Jan 28 21:57:43.305: INFO: Trying to dial the pod Jan 28 21:57:48.337: INFO: Controller my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f: Got expected result from replica 1 [my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f-vmrg8]: "my-hostname-basic-fbd4edaf-2c9b-4647-a219-e382eda2be8f-vmrg8", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:57:48.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-422" for this suite. • [SLOW TEST:14.262 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":101,"skipped":1574,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:57:48.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jan 28 21:57:56.461: INFO: Pod pod-hostip-52b3474d-db45-4971-ac5c-92368c90b296 has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:57:56.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1716" for this suite. 
• [SLOW TEST:8.113 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1586,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:57:56.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 28 21:57:57.624: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 28 21:57:59.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:58:01.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:58:03.671: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 21:58:05.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845477, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 28 21:58:08.685: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:58:21.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6561" for this suite. STEP: Destroying namespace "webhook-6561-markers" for this suite. 
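(Illustrative sketch, not part of the run.) The four timeout scenarios above come down to two fields on the webhook registration: timeoutSeconds, which defaults to 10s in v1 when left empty (as the last step notes), and failurePolicy, where Fail rejects the request on timeout and Ignore lets it through. A hedged sketch of the admissionregistration/v1 shape involved; the webhook path and rule are assumptions, and the suite registers this through its own helpers rather than a literal like this:

    package sketch

    import (
        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func slowWebhookConfig(caBundle []byte) *admissionv1.ValidatingWebhookConfiguration {
        timeout := int32(1)        // deliberately shorter than the webhook's 5s delay
        policy := admissionv1.Fail // Fail => the API request is rejected when the webhook times out
        sideEffects := admissionv1.SideEffectClassNone
        path := "/always-allow-delay-5s" // placeholder path
        return &admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook.example.com"},
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name: "slow-webhook.example.com",
                ClientConfig: admissionv1.WebhookClientConfig{
                    // "e2e-test-webhook" in namespace "webhook-6561" is the service the
                    // log shows being paired with its endpoint above.
                    Service:  &admissionv1.ServiceReference{Namespace: "webhook-6561", Name: "e2e-test-webhook", Path: &path},
                    CABundle: caBundle,
                },
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Create},
                    Rule:       admissionv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"}},
                }},
                TimeoutSeconds:          &timeout, // nil would default to 10s in v1
                FailurePolicy:           &policy,  // admissionv1.Ignore for the "no error" variants
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
    }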
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.691 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":103,"skipped":1603,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:58:21.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4213 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 28 21:58:21.269: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 28 21:59:01.480: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4213 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:59:01.480: INFO: >>> kubeConfig: /root/.kube/config I0128 21:59:01.537193 8 log.go:172] (0xc000ffca50) (0xc002039720) Create stream I0128 21:59:01.537307 8 log.go:172] (0xc000ffca50) (0xc002039720) Stream added, broadcasting: 1 I0128 21:59:01.542493 8 log.go:172] (0xc000ffca50) Reply frame received for 1 I0128 21:59:01.542543 8 log.go:172] (0xc000ffca50) (0xc00137a280) Create stream I0128 21:59:01.542587 8 log.go:172] (0xc000ffca50) (0xc00137a280) Stream added, broadcasting: 3 I0128 21:59:01.544088 8 log.go:172] (0xc000ffca50) Reply frame received for 3 I0128 21:59:01.544147 8 log.go:172] (0xc000ffca50) (0xc001778820) Create stream I0128 21:59:01.544177 8 log.go:172] (0xc000ffca50) (0xc001778820) Stream added, broadcasting: 5 I0128 21:59:01.546762 8 log.go:172] (0xc000ffca50) Reply frame received for 5 I0128 21:59:01.644530 8 log.go:172] (0xc000ffca50) Data frame received for 3 I0128 21:59:01.644724 8 log.go:172] (0xc00137a280) (3) Data frame handling I0128 21:59:01.644752 8 log.go:172] (0xc00137a280) (3) Data frame sent I0128 21:59:01.717992 8 log.go:172] (0xc000ffca50) Data frame received for 1 I0128 21:59:01.718147 8 log.go:172] (0xc002039720) (1) Data frame handling I0128 21:59:01.718169 8 log.go:172] (0xc002039720) (1) Data frame sent I0128 21:59:01.718188 8 log.go:172] (0xc000ffca50) (0xc002039720) Stream removed, broadcasting: 1 I0128 21:59:01.718433 8 log.go:172] (0xc000ffca50) (0xc00137a280) Stream removed, 
broadcasting: 3 I0128 21:59:01.718465 8 log.go:172] (0xc000ffca50) (0xc001778820) Stream removed, broadcasting: 5 I0128 21:59:01.718504 8 log.go:172] (0xc000ffca50) (0xc002039720) Stream removed, broadcasting: 1 I0128 21:59:01.718520 8 log.go:172] (0xc000ffca50) (0xc00137a280) Stream removed, broadcasting: 3 I0128 21:59:01.718535 8 log.go:172] (0xc000ffca50) (0xc001778820) Stream removed, broadcasting: 5 Jan 28 21:59:01.718: INFO: Waiting for responses: map[] I0128 21:59:01.719257 8 log.go:172] (0xc000ffca50) Go away received Jan 28 21:59:01.722: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4213 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 21:59:01.722: INFO: >>> kubeConfig: /root/.kube/config I0128 21:59:01.759951 8 log.go:172] (0xc0028f2c60) (0xc00137a820) Create stream I0128 21:59:01.760231 8 log.go:172] (0xc0028f2c60) (0xc00137a820) Stream added, broadcasting: 1 I0128 21:59:01.765584 8 log.go:172] (0xc0028f2c60) Reply frame received for 1 I0128 21:59:01.765612 8 log.go:172] (0xc0028f2c60) (0xc0017788c0) Create stream I0128 21:59:01.765621 8 log.go:172] (0xc0028f2c60) (0xc0017788c0) Stream added, broadcasting: 3 I0128 21:59:01.768022 8 log.go:172] (0xc0028f2c60) Reply frame received for 3 I0128 21:59:01.768073 8 log.go:172] (0xc0028f2c60) (0xc001422000) Create stream I0128 21:59:01.768089 8 log.go:172] (0xc0028f2c60) (0xc001422000) Stream added, broadcasting: 5 I0128 21:59:01.769867 8 log.go:172] (0xc0028f2c60) Reply frame received for 5 I0128 21:59:01.878112 8 log.go:172] (0xc0028f2c60) Data frame received for 3 I0128 21:59:01.878225 8 log.go:172] (0xc0017788c0) (3) Data frame handling I0128 21:59:01.878242 8 log.go:172] (0xc0017788c0) (3) Data frame sent I0128 21:59:01.960992 8 log.go:172] (0xc0028f2c60) Data frame received for 1 I0128 21:59:01.961113 8 log.go:172] (0xc0028f2c60) (0xc0017788c0) Stream removed, broadcasting: 3 I0128 21:59:01.961167 8 log.go:172] (0xc00137a820) (1) Data frame handling I0128 21:59:01.961180 8 log.go:172] (0xc00137a820) (1) Data frame sent I0128 21:59:01.961232 8 log.go:172] (0xc0028f2c60) (0xc001422000) Stream removed, broadcasting: 5 I0128 21:59:01.961283 8 log.go:172] (0xc0028f2c60) (0xc00137a820) Stream removed, broadcasting: 1 I0128 21:59:01.961309 8 log.go:172] (0xc0028f2c60) Go away received I0128 21:59:01.961779 8 log.go:172] (0xc0028f2c60) (0xc00137a820) Stream removed, broadcasting: 1 I0128 21:59:01.961797 8 log.go:172] (0xc0028f2c60) (0xc0017788c0) Stream removed, broadcasting: 3 I0128 21:59:01.961809 8 log.go:172] (0xc0028f2c60) (0xc001422000) Stream removed, broadcasting: 5 Jan 28 21:59:01.962: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:59:01.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4213" for this suite. 
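(Illustrative sketch, not part of the run.) The two ExecWithOptions calls above drive agnhost's /dial endpoint from the host test container: the webserver at 10.44.0.2:8080 is asked to contact each netserver pod over UDP on port 8081, request its hostname, and relay the answers back as JSON. The same probe issued directly in Go; the IPs are copied from this run and the response body shown is illustrative:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // tries=1: one UDP hostname request to 10.44.0.1:8081, relayed via
        // the agnhost webserver on 10.44.0.2:8080.
        url := "http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // e.g. {"responses":["netserver-0"]} on success
    }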
• [SLOW TEST:40.803 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:59:01.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-ab2ced89-c89b-488e-b84f-a5de8bd30365 STEP: Creating a pod to test consume secrets Jan 28 21:59:02.171: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d" in namespace "projected-8746" to be "success or failure" Jan 28 21:59:02.210: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.189478ms Jan 28 21:59:04.215: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044572841s Jan 28 21:59:06.223: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052620274s Jan 28 21:59:08.245: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074663015s Jan 28 21:59:10.263: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092556044s Jan 28 21:59:12.340: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169727358s Jan 28 21:59:14.347: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.176117892s Jan 28 21:59:16.353: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.181842921s STEP: Saw pod success Jan 28 21:59:16.353: INFO: Pod "pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d" satisfied condition "success or failure" Jan 28 21:59:16.355: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d container secret-volume-test: STEP: delete the pod Jan 28 21:59:16.414: INFO: Waiting for pod pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d to disappear Jan 28 21:59:16.508: INFO: Pod pod-projected-secrets-305e5467-3f2e-46bc-81d4-ac3ff4601b7d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:59:16.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8746" for this suite. • [SLOW TEST:14.609 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1705,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:59:16.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:59:34.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4698" for this suite. • [SLOW TEST:17.476 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":106,"skipped":1711,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:59:34.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 28 21:59:34.169: INFO: Waiting up to 5m0s for pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343" in namespace "emptydir-2699" to be "success or failure" Jan 28 21:59:34.193: INFO: Pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343": Phase="Pending", Reason="", readiness=false. Elapsed: 23.916221ms Jan 28 21:59:36.199: INFO: Pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029573605s Jan 28 21:59:38.206: INFO: Pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037074045s Jan 28 21:59:40.212: INFO: Pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043133815s Jan 28 21:59:42.243: INFO: Pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07416486s STEP: Saw pod success Jan 28 21:59:42.243: INFO: Pod "pod-490abe18-ee36-4319-8a33-c1f2ebd16343" satisfied condition "success or failure" Jan 28 21:59:42.248: INFO: Trying to get logs from node jerma-node pod pod-490abe18-ee36-4319-8a33-c1f2ebd16343 container test-container: STEP: delete the pod Jan 28 21:59:42.641: INFO: Waiting for pod pod-490abe18-ee36-4319-8a33-c1f2ebd16343 to disappear Jan 28 21:59:42.655: INFO: Pod pod-490abe18-ee36-4319-8a33-c1f2ebd16343 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:59:42.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2699" for this suite. 
• [SLOW TEST:8.608 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1722,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:59:42.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-18afd26a-0c73-48b0-925b-b7e9e02684c9 STEP: Creating a pod to test consume secrets Jan 28 21:59:42.821: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40" in namespace "projected-9433" to be "success or failure" Jan 28 21:59:43.190: INFO: Pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40": Phase="Pending", Reason="", readiness=false. Elapsed: 368.64957ms Jan 28 21:59:45.203: INFO: Pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.381642726s Jan 28 21:59:47.214: INFO: Pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39230838s Jan 28 21:59:49.220: INFO: Pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398936434s Jan 28 21:59:51.231: INFO: Pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.40920141s STEP: Saw pod success Jan 28 21:59:51.231: INFO: Pod "pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40" satisfied condition "success or failure" Jan 28 21:59:51.235: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40 container projected-secret-volume-test: STEP: delete the pod Jan 28 21:59:51.544: INFO: Waiting for pod pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40 to disappear Jan 28 21:59:51.556: INFO: Pod pod-projected-secrets-8397271e-b1c9-488f-a984-8a13a51d5f40 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 28 21:59:51.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9433" for this suite. 
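(Illustrative sketch, not part of the run.) The defaultMode knob exercised by the next test lives on the projected volume source and applies to every file the volume projects unless a per-item mode overrides it. A minimal sketch of such a volume; the secret name and the 0400 mode are assumptions, not values from the run:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func projectedSecretVolume() corev1.Volume {
        mode := int32(0400) // the file mode the pod will observe
        return corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode, // per-item Mode entries would take precedence
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-example"},
                        },
                    }},
                },
            },
        }
    }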
• [SLOW TEST:8.903 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1733,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 28 21:59:51.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8213.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8213.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 102.71.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.71.102_udp@PTR;check="$$(dig +tcp +noall +answer +search 102.71.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.71.102_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8213.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8213.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8213.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 102.71.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.71.102_udp@PTR;check="$$(dig +tcp +noall +answer +search 102.71.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.71.102_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 28 22:00:03.869: INFO: Unable to read wheezy_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.878: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.892: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.903: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.956: INFO: Unable to read jessie_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.964: INFO: Unable to read jessie_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:03.979: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:04.012: INFO: Lookups using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec failed for: [wheezy_udp@dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_udp@dns-test-service.dns-8213.svc.cluster.local jessie_tcp@dns-test-service.dns-8213.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local] Jan 28 22:00:09.026: INFO: Unable to read wheezy_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.031: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods 
dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.036: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.041: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.092: INFO: Unable to read jessie_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.099: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.103: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:09.129: INFO: Lookups using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec failed for: [wheezy_udp@dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_udp@dns-test-service.dns-8213.svc.cluster.local jessie_tcp@dns-test-service.dns-8213.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local] Jan 28 22:00:14.054: INFO: Unable to read wheezy_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:14.069: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:14.076: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:14.083: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec) Jan 28 22:00:14.116: INFO: Unable to read jessie_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the 
server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:14.120: INFO: Unable to read jessie_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:14.125: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:14.130: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:14.201: INFO: Lookups using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec failed for: [wheezy_udp@dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_udp@dns-test-service.dns-8213.svc.cluster.local jessie_tcp@dns-test-service.dns-8213.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local]
Jan 28 22:00:19.021: INFO: Unable to read wheezy_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.033: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.037: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.040: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.070: INFO: Unable to read jessie_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.076: INFO: Unable to read jessie_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.079: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.081: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:19.096: INFO: Lookups using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec failed for: [wheezy_udp@dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_udp@dns-test-service.dns-8213.svc.cluster.local jessie_tcp@dns-test-service.dns-8213.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local]
Jan 28 22:00:24.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.048: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.063: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.086: INFO: Unable to read jessie_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.089: INFO: Unable to read jessie_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.091: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.094: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:24.114: INFO: Lookups using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec failed for: [wheezy_udp@dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_udp@dns-test-service.dns-8213.svc.cluster.local jessie_tcp@dns-test-service.dns-8213.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local]
Jan 28 22:00:29.018: INFO: Unable to read wheezy_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.022: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.025: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.073: INFO: Unable to read jessie_udp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.077: INFO: Unable to read jessie_tcp@dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.081: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.086: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local from pod dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec: the server could not find the requested resource (get pods dns-test-0664de54-d4bb-4840-8856-04728ddb9dec)
Jan 28 22:00:29.112: INFO: Lookups using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec failed for: [wheezy_udp@dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@dns-test-service.dns-8213.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_udp@dns-test-service.dns-8213.svc.cluster.local jessie_tcp@dns-test-service.dns-8213.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8213.svc.cluster.local]
Jan 28 22:00:34.083: INFO: DNS probes using dns-8213/dns-test-0664de54-d4bb-4840-8856-04728ddb9dec succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:00:34.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8213" for this suite.

• [SLOW TEST:42.823 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":109,"skipped":1740,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:00:34.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:00:41.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2038" for this suite.

• [SLOW TEST:7.193 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":110,"skipped":1751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:00:41.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:00:41.767: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log
apt/
... (200; 11.294374ms)
Jan 28 22:00:41.771: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.381039ms)
Jan 28 22:00:41.775: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.978495ms)
Jan 28 22:00:41.780: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.567021ms)
Jan 28 22:00:41.786: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.728383ms)
Jan 28 22:00:41.791: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.217949ms)
Jan 28 22:00:41.795: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.883601ms)
Jan 28 22:00:41.800: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.944443ms)
Jan 28 22:00:41.804: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.062995ms)
Jan 28 22:00:41.812: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.165703ms)
Jan 28 22:00:41.815: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.731556ms)
Jan 28 22:00:41.819: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.68794ms)
Jan 28 22:00:41.829: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 10.105513ms)
Jan 28 22:00:41.833: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.380723ms)
Jan 28 22:00:41.836: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.128362ms)
Jan 28 22:00:41.839: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.854465ms)
Jan 28 22:00:41.842: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.071784ms)
Jan 28 22:00:41.909: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 67.389703ms)
Jan 28 22:00:41.915: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.892562ms)
Jan 28 22:00:41.921: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.777029ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:00:41.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5419" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":111,"skipped":1805,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:00:41.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-2164aa48-8ad1-4370-ae65-e814ea6b1b61
STEP: Creating a pod to test consume configMaps
Jan 28 22:00:42.111: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0" in namespace "projected-6673" to be "success or failure"
Jan 28 22:00:42.264: INFO: Pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0": Phase="Pending", Reason="", readiness=false. Elapsed: 152.280324ms
Jan 28 22:00:44.273: INFO: Pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161602177s
Jan 28 22:00:46.284: INFO: Pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172091476s
Jan 28 22:00:48.292: INFO: Pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180253833s
Jan 28 22:00:50.305: INFO: Pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193489446s
STEP: Saw pod success
Jan 28 22:00:50.305: INFO: Pod "pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0" satisfied condition "success or failure"
Jan 28 22:00:50.311: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 22:00:50.507: INFO: Waiting for pod pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0 to disappear
Jan 28 22:00:50.511: INFO: Pod pod-projected-configmaps-08478140-dd03-4425-a4bf-e4651c4490c0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:00:50.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6673" for this suite.

• [SLOW TEST:8.592 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1826,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:00:50.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:00:51.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7948" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":113,"skipped":1831,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:00:51.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:00:51.489: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 28 22:00:51.548: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 22:00:59.573: INFO: Creating deployment "test-rolling-update-deployment"
Jan 28 22:00:59.588: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 28 22:00:59.656: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 28 22:01:01.670: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 28 22:01:01.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:01:03.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:01:05.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845660, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845659, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:01:07.680: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 28 22:01:07.696: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-928 /apis/apps/v1/namespaces/deployment-928/deployments/test-rolling-update-deployment 46a36c58-c382-49df-aa0c-988f63da3a64 4969523 1 2020-01-28 22:00:59 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d11eb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-28 22:00:59 +0000 UTC,LastTransitionTime:2020-01-28 22:00:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-28 22:01:06 +0000 UTC,LastTransitionTime:2020-01-28 22:00:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 28 22:01:07.702: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-928 /apis/apps/v1/namespaces/deployment-928/replicasets/test-rolling-update-deployment-67cf4f6444 a1a85c57-8531-4984-9725-0ce2fbd3eec9 4969511 1 2020-01-28 22:00:59 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 46a36c58-c382-49df-aa0c-988f63da3a64 0xc002d38a37 0xc002d38a38}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d38aa8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 28 22:01:07.702: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 28 22:01:07.702: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-928 /apis/apps/v1/namespaces/deployment-928/replicasets/test-rolling-update-controller 7d8415e9-e93a-4a9a-bba1-8c398e96aa5f 4969522 2 2020-01-28 22:00:51 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 46a36c58-c382-49df-aa0c-988f63da3a64 0xc002d38967 0xc002d38968}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d389c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 22:01:07.707: INFO: Pod "test-rolling-update-deployment-67cf4f6444-9lwpf" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-9lwpf test-rolling-update-deployment-67cf4f6444- deployment-928 /api/v1/namespaces/deployment-928/pods/test-rolling-update-deployment-67cf4f6444-9lwpf 7ca4b3ed-934c-4ad2-b8da-dce5ecb54f05 4969510 0 2020-01-28 22:00:59 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 a1a85c57-8531-4984-9725-0ce2fbd3eec9 0xc001dc0717 0xc001dc0718}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6l27,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6l27,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6l27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:00:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:01:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:01:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:00:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-28 22:00:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:01:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://1c35cdc80d6e47e4fea3e62c11619555e83f6ed4dfd7e1aedfb71854649ec9a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:01:07.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-928" for this suite.

• [SLOW TEST:16.567 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":114,"skipped":1848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:01:07.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:01:07.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1254" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":115,"skipped":1889,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:01:07.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 28 22:01:07.983: INFO: PodSpec: initContainers in spec.initContainers
Jan 28 22:02:09.802: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ff171d85-23fb-4dd2-a3d8-4291c29f4d9b", GenerateName:"", Namespace:"init-container-5949", SelfLink:"/api/v1/namespaces/init-container-5949/pods/pod-init-ff171d85-23fb-4dd2-a3d8-4291c29f4d9b", UID:"1bac83d6-c894-4804-8ed3-d807905f8e96", ResourceVersion:"4969729", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715845667, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"983377768"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-c644n", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004680040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c644n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c644n", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c644n", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dc00a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023520c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dc0130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dc0150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001dc0158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001dc015c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845667, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc003a7c0c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ad6070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ad60e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d6d17e3f9e4ca9e12ee963b3f2d9fce4c5315a4e870ee6b64313d0be65052ae8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a7c180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003a7c120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001dc01df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:02:09.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5949" for this suite.

• [SLOW TEST:61.958 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":116,"skipped":1910,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:02:09.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:02:10.616: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:02:12.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:02:14.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:02:16.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845730, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:02:19.698: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:02:19.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4520" for this suite.
STEP: Destroying namespace "webhook-4520-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.331 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":117,"skipped":1920,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:02:20.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:02:21.357: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 28 22:02:23.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:02:25.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:02:27.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:02:29.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715845741, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:02:32.407: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:02:32.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:02:34.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3931" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.155 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":118,"skipped":1940,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:02:34.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 22:02:34.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4398'
Jan 28 22:02:37.646: INFO: stderr: ""
Jan 28 22:02:37.646: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 28 22:02:47.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4398 -o json'
Jan 28 22:02:47.934: INFO: stderr: ""
Jan 28 22:02:47.935: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-28T22:02:37Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4398\",\n        \"resourceVersion\": \"4969962\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4398/pods/e2e-test-httpd-pod\",\n        \"uid\": \"451dc7ba-adb5-4bfb-b148-4b300551fce1\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-977sd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-977sd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-977sd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T22:02:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T22:02:44Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T22:02:44Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T22:02:37Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://c8557842c9d3ce130907b7b10a000c7d0210cc709ebf1daf53b93ae5ceeb7a31\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-28T22:02:44Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-28T22:02:38Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 28 22:02:47.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4398'
Jan 28 22:02:48.383: INFO: stderr: ""
Jan 28 22:02:48.383: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan 28 22:02:48.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4398'
Jan 28 22:02:53.620: INFO: stderr: ""
Jan 28 22:02:53.621: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:02:53.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4398" for this suite.

• [SLOW TEST:19.311 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":119,"skipped":1953,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:02:53.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:02:54.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9314" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1996,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:02:54.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ddfdb871-1530-4426-972c-2b59db69cf63
STEP: Creating a pod to test consume configMaps
Jan 28 22:02:54.414: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7" in namespace "projected-571" to be "success or failure"
Jan 28 22:02:54.426: INFO: Pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.392048ms
Jan 28 22:02:56.439: INFO: Pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024336739s
Jan 28 22:02:58.447: INFO: Pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03224723s
Jan 28 22:03:00.480: INFO: Pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065684798s
Jan 28 22:03:02.514: INFO: Pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0992942s
STEP: Saw pod success
Jan 28 22:03:02.514: INFO: Pod "pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7" satisfied condition "success or failure"
Jan 28 22:03:02.519: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 22:03:02.583: INFO: Waiting for pod pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7 to disappear
Jan 28 22:03:02.709: INFO: Pod pod-projected-configmaps-83a88916-1952-459e-b16c-2828a2e849b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:03:02.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-571" for this suite.

• [SLOW TEST:8.489 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:03:02.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:03:02.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 28 22:03:05.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8605 create -f -'
Jan 28 22:03:08.447: INFO: stderr: ""
Jan 28 22:03:08.447: INFO: stdout: "e2e-test-crd-publish-openapi-5861-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 28 22:03:08.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8605 delete e2e-test-crd-publish-openapi-5861-crds test-cr'
Jan 28 22:03:08.579: INFO: stderr: ""
Jan 28 22:03:08.580: INFO: stdout: "e2e-test-crd-publish-openapi-5861-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 28 22:03:08.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8605 apply -f -'
Jan 28 22:03:08.880: INFO: stderr: ""
Jan 28 22:03:08.880: INFO: stdout: "e2e-test-crd-publish-openapi-5861-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 28 22:03:08.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8605 delete e2e-test-crd-publish-openapi-5861-crds test-cr'
Jan 28 22:03:09.095: INFO: stderr: ""
Jan 28 22:03:09.095: INFO: stdout: "e2e-test-crd-publish-openapi-5861-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 28 22:03:09.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5861-crds'
Jan 28 22:03:09.444: INFO: stderr: ""
Jan 28 22:03:09.445: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5861-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:03:12.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8605" for this suite.

• [SLOW TEST:9.594 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":122,"skipped":2061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:03:12.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:04:12.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3603" for this suite.

• [SLOW TEST:60.182 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2090,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:04:12.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 28 22:04:18.644: INFO: &Pod{ObjectMeta:{send-events-3de36cb9-0170-4d41-87a1-9e31cbf169cd  events-1397 /api/v1/namespaces/events-1397/pods/send-events-3de36cb9-0170-4d41-87a1-9e31cbf169cd 401581ef-fd9b-448f-a47d-d99dc32cb92b 4970305 0 2020-01-28 22:04:12 +0000 UTC   map[name:foo time:583759489] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nr4tv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nr4tv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nr4tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:04:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:04:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:04:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:04:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-28 22:04:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:04:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://d2f0b1ac4ca992df43b2075f7e16c8363b348ab2240a5aa70b9879f764c3e969,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 28 22:04:20.655: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 28 22:04:22.662: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:04:22.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1397" for this suite.

• [SLOW TEST:10.241 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":124,"skipped":2100,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:04:22.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6571
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-6571
I0128 22:04:23.020058       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6571, replica count: 2
I0128 22:04:26.070935       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:04:29.071564       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:04:32.072478       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 22:04:32.072: INFO: Creating new exec pod
Jan 28 22:04:41.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6571 execpodskncl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 28 22:04:41.564: INFO: stderr: "I0128 22:04:41.393148    2674 log.go:172] (0xc000baee70) (0xc000647cc0) Create stream\nI0128 22:04:41.393384    2674 log.go:172] (0xc000baee70) (0xc000647cc0) Stream added, broadcasting: 1\nI0128 22:04:41.397207    2674 log.go:172] (0xc000baee70) Reply frame received for 1\nI0128 22:04:41.397262    2674 log.go:172] (0xc000baee70) (0xc000ba20a0) Create stream\nI0128 22:04:41.397270    2674 log.go:172] (0xc000baee70) (0xc000ba20a0) Stream added, broadcasting: 3\nI0128 22:04:41.398858    2674 log.go:172] (0xc000baee70) Reply frame received for 3\nI0128 22:04:41.398873    2674 log.go:172] (0xc000baee70) (0xc000ba2140) Create stream\nI0128 22:04:41.398877    2674 log.go:172] (0xc000baee70) (0xc000ba2140) Stream added, broadcasting: 5\nI0128 22:04:41.400521    2674 log.go:172] (0xc000baee70) Reply frame received for 5\nI0128 22:04:41.460623    2674 log.go:172] (0xc000baee70) Data frame received for 5\nI0128 22:04:41.460928    2674 log.go:172] (0xc000ba2140) (5) Data frame handling\nI0128 22:04:41.460975    2674 log.go:172] (0xc000ba2140) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0128 22:04:41.464845    2674 log.go:172] (0xc000baee70) Data frame received for 5\nI0128 22:04:41.464867    2674 log.go:172] (0xc000ba2140) (5) Data frame handling\nI0128 22:04:41.464884    2674 log.go:172] (0xc000ba2140) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0128 22:04:41.550407    2674 log.go:172] (0xc000baee70) Data frame received for 1\nI0128 22:04:41.550541    2674 log.go:172] (0xc000baee70) (0xc000ba20a0) Stream removed, broadcasting: 3\nI0128 22:04:41.550604    2674 log.go:172] (0xc000647cc0) (1) Data frame handling\nI0128 22:04:41.550652    2674 log.go:172] (0xc000647cc0) (1) Data frame sent\nI0128 22:04:41.551079    2674 log.go:172] (0xc000baee70) (0xc000ba2140) Stream removed, broadcasting: 5\nI0128 22:04:41.551364    2674 log.go:172] (0xc000baee70) (0xc000647cc0) Stream removed, broadcasting: 1\nI0128 22:04:41.551425    2674 log.go:172] (0xc000baee70) Go away received\nI0128 22:04:41.553211    2674 log.go:172] (0xc000baee70) (0xc000647cc0) Stream removed, broadcasting: 1\nI0128 22:04:41.553236    2674 log.go:172] (0xc000baee70) (0xc000ba20a0) Stream removed, broadcasting: 3\nI0128 22:04:41.553260    2674 log.go:172] (0xc000baee70) (0xc000ba2140) Stream removed, broadcasting: 5\n"
Jan 28 22:04:41.565: INFO: stdout: ""
Jan 28 22:04:41.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6571 execpodskncl -- /bin/sh -x -c nc -zv -t -w 2 10.96.172.130 80'
Jan 28 22:04:41.915: INFO: stderr: "I0128 22:04:41.717982    2693 log.go:172] (0xc000a6ef20) (0xc0008fc5a0) Create stream\nI0128 22:04:41.718154    2693 log.go:172] (0xc000a6ef20) (0xc0008fc5a0) Stream added, broadcasting: 1\nI0128 22:04:41.723216    2693 log.go:172] (0xc000a6ef20) Reply frame received for 1\nI0128 22:04:41.723291    2693 log.go:172] (0xc000a6ef20) (0xc000694640) Create stream\nI0128 22:04:41.723299    2693 log.go:172] (0xc000a6ef20) (0xc000694640) Stream added, broadcasting: 3\nI0128 22:04:41.724638    2693 log.go:172] (0xc000a6ef20) Reply frame received for 3\nI0128 22:04:41.724740    2693 log.go:172] (0xc000a6ef20) (0xc000539400) Create stream\nI0128 22:04:41.724752    2693 log.go:172] (0xc000a6ef20) (0xc000539400) Stream added, broadcasting: 5\nI0128 22:04:41.727241    2693 log.go:172] (0xc000a6ef20) Reply frame received for 5\nI0128 22:04:41.821931    2693 log.go:172] (0xc000a6ef20) Data frame received for 5\nI0128 22:04:41.822045    2693 log.go:172] (0xc000539400) (5) Data frame handling\nI0128 22:04:41.822073    2693 log.go:172] (0xc000539400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.172.130 80\nI0128 22:04:41.823620    2693 log.go:172] (0xc000a6ef20) Data frame received for 5\nI0128 22:04:41.823633    2693 log.go:172] (0xc000539400) (5) Data frame handling\nI0128 22:04:41.823645    2693 log.go:172] (0xc000539400) (5) Data frame sent\nConnection to 10.96.172.130 80 port [tcp/http] succeeded!\nI0128 22:04:41.907202    2693 log.go:172] (0xc000a6ef20) (0xc000694640) Stream removed, broadcasting: 3\nI0128 22:04:41.907454    2693 log.go:172] (0xc000a6ef20) Data frame received for 1\nI0128 22:04:41.907470    2693 log.go:172] (0xc0008fc5a0) (1) Data frame handling\nI0128 22:04:41.907480    2693 log.go:172] (0xc0008fc5a0) (1) Data frame sent\nI0128 22:04:41.907485    2693 log.go:172] (0xc000a6ef20) (0xc0008fc5a0) Stream removed, broadcasting: 1\nI0128 22:04:41.907918    2693 log.go:172] (0xc000a6ef20) (0xc000539400) Stream removed, broadcasting: 5\nI0128 22:04:41.907955    2693 log.go:172] (0xc000a6ef20) (0xc0008fc5a0) Stream removed, broadcasting: 1\nI0128 22:04:41.907966    2693 log.go:172] (0xc000a6ef20) (0xc000694640) Stream removed, broadcasting: 3\nI0128 22:04:41.907972    2693 log.go:172] (0xc000a6ef20) (0xc000539400) Stream removed, broadcasting: 5\n"
Jan 28 22:04:41.915: INFO: stdout: ""
Jan 28 22:04:41.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6571 execpodskncl -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32216'
Jan 28 22:04:42.391: INFO: stderr: "I0128 22:04:42.148960    2712 log.go:172] (0xc000ba1340) (0xc000b94820) Create stream\nI0128 22:04:42.149525    2712 log.go:172] (0xc000ba1340) (0xc000b94820) Stream added, broadcasting: 1\nI0128 22:04:42.177473    2712 log.go:172] (0xc000ba1340) Reply frame received for 1\nI0128 22:04:42.178614    2712 log.go:172] (0xc000ba1340) (0xc000b94000) Create stream\nI0128 22:04:42.178705    2712 log.go:172] (0xc000ba1340) (0xc000b94000) Stream added, broadcasting: 3\nI0128 22:04:42.182865    2712 log.go:172] (0xc000ba1340) Reply frame received for 3\nI0128 22:04:42.183069    2712 log.go:172] (0xc000ba1340) (0xc0003034a0) Create stream\nI0128 22:04:42.183097    2712 log.go:172] (0xc000ba1340) (0xc0003034a0) Stream added, broadcasting: 5\nI0128 22:04:42.184850    2712 log.go:172] (0xc000ba1340) Reply frame received for 5\nI0128 22:04:42.258363    2712 log.go:172] (0xc000ba1340) Data frame received for 5\nI0128 22:04:42.258591    2712 log.go:172] (0xc0003034a0) (5) Data frame handling\nI0128 22:04:42.258680    2712 log.go:172] (0xc0003034a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32216\nConnection to 10.96.2.250 32216 port [tcp/32216] succeeded!\nI0128 22:04:42.367900    2712 log.go:172] (0xc000ba1340) (0xc0003034a0) Stream removed, broadcasting: 5\nI0128 22:04:42.368367    2712 log.go:172] (0xc000ba1340) Data frame received for 1\nI0128 22:04:42.368428    2712 log.go:172] (0xc000ba1340) (0xc000b94000) Stream removed, broadcasting: 3\nI0128 22:04:42.368624    2712 log.go:172] (0xc000b94820) (1) Data frame handling\nI0128 22:04:42.368680    2712 log.go:172] (0xc000b94820) (1) Data frame sent\nI0128 22:04:42.368710    2712 log.go:172] (0xc000ba1340) (0xc000b94820) Stream removed, broadcasting: 1\nI0128 22:04:42.368757    2712 log.go:172] (0xc000ba1340) Go away received\nI0128 22:04:42.370826    2712 log.go:172] (0xc000ba1340) (0xc000b94820) Stream removed, broadcasting: 1\nI0128 22:04:42.370841    2712 log.go:172] (0xc000ba1340) (0xc000b94000) Stream removed, broadcasting: 3\nI0128 22:04:42.370851    2712 log.go:172] (0xc000ba1340) (0xc0003034a0) Stream removed, broadcasting: 5\n"
Jan 28 22:04:42.391: INFO: stdout: ""
Jan 28 22:04:42.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6571 execpodskncl -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32216'
Jan 28 22:04:42.799: INFO: stderr: "I0128 22:04:42.553614    2733 log.go:172] (0xc0009f8580) (0xc00090c000) Create stream\nI0128 22:04:42.554100    2733 log.go:172] (0xc0009f8580) (0xc00090c000) Stream added, broadcasting: 1\nI0128 22:04:42.562139    2733 log.go:172] (0xc0009f8580) Reply frame received for 1\nI0128 22:04:42.562220    2733 log.go:172] (0xc0009f8580) (0xc0006cdae0) Create stream\nI0128 22:04:42.562240    2733 log.go:172] (0xc0009f8580) (0xc0006cdae0) Stream added, broadcasting: 3\nI0128 22:04:42.564928    2733 log.go:172] (0xc0009f8580) Reply frame received for 3\nI0128 22:04:42.564956    2733 log.go:172] (0xc0009f8580) (0xc0006cdcc0) Create stream\nI0128 22:04:42.564964    2733 log.go:172] (0xc0009f8580) (0xc0006cdcc0) Stream added, broadcasting: 5\nI0128 22:04:42.566711    2733 log.go:172] (0xc0009f8580) Reply frame received for 5\nI0128 22:04:42.679019    2733 log.go:172] (0xc0009f8580) Data frame received for 5\nI0128 22:04:42.679314    2733 log.go:172] (0xc0006cdcc0) (5) Data frame handling\nI0128 22:04:42.679405    2733 log.go:172] (0xc0006cdcc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32216\nI0128 22:04:42.682597    2733 log.go:172] (0xc0009f8580) Data frame received for 5\nI0128 22:04:42.682646    2733 log.go:172] (0xc0006cdcc0) (5) Data frame handling\nI0128 22:04:42.682681    2733 log.go:172] (0xc0006cdcc0) (5) Data frame sent\nConnection to 10.96.1.234 32216 port [tcp/32216] succeeded!\nI0128 22:04:42.786840    2733 log.go:172] (0xc0009f8580) Data frame received for 1\nI0128 22:04:42.786975    2733 log.go:172] (0xc0009f8580) (0xc0006cdcc0) Stream removed, broadcasting: 5\nI0128 22:04:42.787021    2733 log.go:172] (0xc00090c000) (1) Data frame handling\nI0128 22:04:42.787038    2733 log.go:172] (0xc00090c000) (1) Data frame sent\nI0128 22:04:42.787068    2733 log.go:172] (0xc0009f8580) (0xc0006cdae0) Stream removed, broadcasting: 3\nI0128 22:04:42.787154    2733 log.go:172] (0xc0009f8580) (0xc00090c000) Stream removed, broadcasting: 1\nI0128 22:04:42.787197    2733 log.go:172] (0xc0009f8580) Go away received\nI0128 22:04:42.788912    2733 log.go:172] (0xc0009f8580) (0xc00090c000) Stream removed, broadcasting: 1\nI0128 22:04:42.789056    2733 log.go:172] (0xc0009f8580) (0xc0006cdae0) Stream removed, broadcasting: 3\nI0128 22:04:42.789073    2733 log.go:172] (0xc0009f8580) (0xc0006cdcc0) Stream removed, broadcasting: 5\n"
Jan 28 22:04:42.799: INFO: stdout: ""
Jan 28 22:04:42.799: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:04:42.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6571" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:20.122 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":125,"skipped":2115,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:04:42.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2799
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 28 22:04:42.988: INFO: Found 0 stateful pods, waiting for 3
Jan 28 22:04:52.995: INFO: Found 2 stateful pods, waiting for 3
Jan 28 22:05:03.020: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:05:03.020: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:05:03.020: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 22:05:13.014: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:05:13.014: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:05:13.014: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:05:13.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2799 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 22:05:13.499: INFO: stderr: "I0128 22:05:13.292984    2753 log.go:172] (0xc0009620b0) (0xc000303540) Create stream\nI0128 22:05:13.293196    2753 log.go:172] (0xc0009620b0) (0xc000303540) Stream added, broadcasting: 1\nI0128 22:05:13.297529    2753 log.go:172] (0xc0009620b0) Reply frame received for 1\nI0128 22:05:13.297573    2753 log.go:172] (0xc0009620b0) (0xc00093e000) Create stream\nI0128 22:05:13.297581    2753 log.go:172] (0xc0009620b0) (0xc00093e000) Stream added, broadcasting: 3\nI0128 22:05:13.298525    2753 log.go:172] (0xc0009620b0) Reply frame received for 3\nI0128 22:05:13.298570    2753 log.go:172] (0xc0009620b0) (0xc0006f7ae0) Create stream\nI0128 22:05:13.298580    2753 log.go:172] (0xc0009620b0) (0xc0006f7ae0) Stream added, broadcasting: 5\nI0128 22:05:13.300431    2753 log.go:172] (0xc0009620b0) Reply frame received for 5\nI0128 22:05:13.381926    2753 log.go:172] (0xc0009620b0) Data frame received for 5\nI0128 22:05:13.382173    2753 log.go:172] (0xc0006f7ae0) (5) Data frame handling\nI0128 22:05:13.382314    2753 log.go:172] (0xc0006f7ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 22:05:13.417142    2753 log.go:172] (0xc0009620b0) Data frame received for 3\nI0128 22:05:13.417209    2753 log.go:172] (0xc00093e000) (3) Data frame handling\nI0128 22:05:13.417239    2753 log.go:172] (0xc00093e000) (3) Data frame sent\nI0128 22:05:13.485177    2753 log.go:172] (0xc0009620b0) Data frame received for 1\nI0128 22:05:13.485287    2753 log.go:172] (0xc0009620b0) (0xc0006f7ae0) Stream removed, broadcasting: 5\nI0128 22:05:13.485354    2753 log.go:172] (0xc000303540) (1) Data frame handling\nI0128 22:05:13.485375    2753 log.go:172] (0xc000303540) (1) Data frame sent\nI0128 22:05:13.485402    2753 log.go:172] (0xc0009620b0) (0xc00093e000) Stream removed, broadcasting: 3\nI0128 22:05:13.485439    2753 log.go:172] (0xc0009620b0) (0xc000303540) Stream removed, broadcasting: 1\nI0128 22:05:13.485491    2753 log.go:172] (0xc0009620b0) Go away received\nI0128 22:05:13.486935    2753 log.go:172] (0xc0009620b0) (0xc000303540) Stream removed, broadcasting: 1\nI0128 22:05:13.486957    2753 log.go:172] (0xc0009620b0) (0xc00093e000) Stream removed, broadcasting: 3\nI0128 22:05:13.486972    2753 log.go:172] (0xc0009620b0) (0xc0006f7ae0) Stream removed, broadcasting: 5\n"
Jan 28 22:05:13.499: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 22:05:13.499: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 28 22:05:23.549: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 28 22:05:33.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2799 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:05:34.080: INFO: stderr: "I0128 22:05:33.830391    2773 log.go:172] (0xc00071a6e0) (0xc000704000) Create stream\nI0128 22:05:33.830811    2773 log.go:172] (0xc00071a6e0) (0xc000704000) Stream added, broadcasting: 1\nI0128 22:05:33.855777    2773 log.go:172] (0xc00071a6e0) Reply frame received for 1\nI0128 22:05:33.855943    2773 log.go:172] (0xc00071a6e0) (0xc000704140) Create stream\nI0128 22:05:33.855961    2773 log.go:172] (0xc00071a6e0) (0xc000704140) Stream added, broadcasting: 3\nI0128 22:05:33.858947    2773 log.go:172] (0xc00071a6e0) Reply frame received for 3\nI0128 22:05:33.859065    2773 log.go:172] (0xc00071a6e0) (0xc0006b5ae0) Create stream\nI0128 22:05:33.859120    2773 log.go:172] (0xc00071a6e0) (0xc0006b5ae0) Stream added, broadcasting: 5\nI0128 22:05:33.861835    2773 log.go:172] (0xc00071a6e0) Reply frame received for 5\nI0128 22:05:33.977659    2773 log.go:172] (0xc00071a6e0) Data frame received for 3\nI0128 22:05:33.977742    2773 log.go:172] (0xc000704140) (3) Data frame handling\nI0128 22:05:33.977761    2773 log.go:172] (0xc000704140) (3) Data frame sent\nI0128 22:05:33.977787    2773 log.go:172] (0xc00071a6e0) Data frame received for 5\nI0128 22:05:33.977797    2773 log.go:172] (0xc0006b5ae0) (5) Data frame handling\nI0128 22:05:33.977816    2773 log.go:172] (0xc0006b5ae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 22:05:34.061917    2773 log.go:172] (0xc00071a6e0) (0xc000704140) Stream removed, broadcasting: 3\nI0128 22:05:34.062067    2773 log.go:172] (0xc00071a6e0) Data frame received for 1\nI0128 22:05:34.062108    2773 log.go:172] (0xc000704000) (1) Data frame handling\nI0128 22:05:34.062138    2773 log.go:172] (0xc000704000) (1) Data frame sent\nI0128 22:05:34.062155    2773 log.go:172] (0xc00071a6e0) (0xc000704000) Stream removed, broadcasting: 1\nI0128 22:05:34.062169    2773 log.go:172] (0xc00071a6e0) (0xc0006b5ae0) Stream removed, broadcasting: 5\nI0128 22:05:34.062274    2773 log.go:172] (0xc00071a6e0) Go away received\nI0128 22:05:34.064143    2773 log.go:172] (0xc00071a6e0) (0xc000704000) Stream removed, broadcasting: 1\nI0128 22:05:34.064169    2773 log.go:172] (0xc00071a6e0) (0xc000704140) Stream removed, broadcasting: 3\nI0128 22:05:34.064194    2773 log.go:172] (0xc00071a6e0) (0xc0006b5ae0) Stream removed, broadcasting: 5\n"
Jan 28 22:05:34.080: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 22:05:34.080: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 28 22:05:44.105: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:05:44.105: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 22:05:44.105: INFO: Waiting for Pod statefulset-2799/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 22:05:54.126: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:05:54.127: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 22:06:04.145: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:06:04.145: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Jan 28 22:06:14.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2799 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 22:06:14.591: INFO: stderr: "I0128 22:06:14.287872    2793 log.go:172] (0xc000adb970) (0xc0009f4820) Create stream\nI0128 22:06:14.288384    2793 log.go:172] (0xc000adb970) (0xc0009f4820) Stream added, broadcasting: 1\nI0128 22:06:14.298659    2793 log.go:172] (0xc000adb970) Reply frame received for 1\nI0128 22:06:14.298834    2793 log.go:172] (0xc000adb970) (0xc000665ae0) Create stream\nI0128 22:06:14.298892    2793 log.go:172] (0xc000adb970) (0xc000665ae0) Stream added, broadcasting: 3\nI0128 22:06:14.301567    2793 log.go:172] (0xc000adb970) Reply frame received for 3\nI0128 22:06:14.301698    2793 log.go:172] (0xc000adb970) (0xc0005b86e0) Create stream\nI0128 22:06:14.301771    2793 log.go:172] (0xc000adb970) (0xc0005b86e0) Stream added, broadcasting: 5\nI0128 22:06:14.303596    2793 log.go:172] (0xc000adb970) Reply frame received for 5\nI0128 22:06:14.386059    2793 log.go:172] (0xc000adb970) Data frame received for 5\nI0128 22:06:14.386121    2793 log.go:172] (0xc0005b86e0) (5) Data frame handling\nI0128 22:06:14.386143    2793 log.go:172] (0xc0005b86e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 22:06:14.427577    2793 log.go:172] (0xc000adb970) Data frame received for 3\nI0128 22:06:14.427663    2793 log.go:172] (0xc000665ae0) (3) Data frame handling\nI0128 22:06:14.427708    2793 log.go:172] (0xc000665ae0) (3) Data frame sent\nI0128 22:06:14.571034    2793 log.go:172] (0xc000adb970) (0xc000665ae0) Stream removed, broadcasting: 3\nI0128 22:06:14.572051    2793 log.go:172] (0xc000adb970) Data frame received for 1\nI0128 22:06:14.572091    2793 log.go:172] (0xc0009f4820) (1) Data frame handling\nI0128 22:06:14.572170    2793 log.go:172] (0xc0009f4820) (1) Data frame sent\nI0128 22:06:14.572210    2793 log.go:172] (0xc000adb970) (0xc0009f4820) Stream removed, broadcasting: 1\nI0128 22:06:14.573021    2793 log.go:172] (0xc000adb970) (0xc0005b86e0) Stream removed, broadcasting: 5\nI0128 22:06:14.573279    2793 log.go:172] (0xc000adb970) Go away received\nI0128 22:06:14.574187    2793 log.go:172] (0xc000adb970) (0xc0009f4820) Stream removed, broadcasting: 1\nI0128 22:06:14.574238    2793 log.go:172] (0xc000adb970) (0xc000665ae0) Stream removed, broadcasting: 3\nI0128 22:06:14.574242    2793 log.go:172] (0xc000adb970) (0xc0005b86e0) Stream removed, broadcasting: 5\n"
Jan 28 22:06:14.592: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 22:06:14.592: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 28 22:06:24.667: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 28 22:06:34.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2799 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:06:35.152: INFO: stderr: "I0128 22:06:34.902793    2813 log.go:172] (0xc000ae3760) (0xc000b5e8c0) Create stream\nI0128 22:06:34.904458    2813 log.go:172] (0xc000ae3760) (0xc000b5e8c0) Stream added, broadcasting: 1\nI0128 22:06:34.912167    2813 log.go:172] (0xc000ae3760) Reply frame received for 1\nI0128 22:06:34.912259    2813 log.go:172] (0xc000ae3760) (0xc0006d86e0) Create stream\nI0128 22:06:34.912273    2813 log.go:172] (0xc000ae3760) (0xc0006d86e0) Stream added, broadcasting: 3\nI0128 22:06:34.913976    2813 log.go:172] (0xc000ae3760) Reply frame received for 3\nI0128 22:06:34.914023    2813 log.go:172] (0xc000ae3760) (0xc0004574a0) Create stream\nI0128 22:06:34.914038    2813 log.go:172] (0xc000ae3760) (0xc0004574a0) Stream added, broadcasting: 5\nI0128 22:06:34.918323    2813 log.go:172] (0xc000ae3760) Reply frame received for 5\nI0128 22:06:34.998362    2813 log.go:172] (0xc000ae3760) Data frame received for 3\nI0128 22:06:34.998419    2813 log.go:172] (0xc0006d86e0) (3) Data frame handling\nI0128 22:06:34.998444    2813 log.go:172] (0xc0006d86e0) (3) Data frame sent\nI0128 22:06:35.002487    2813 log.go:172] (0xc000ae3760) Data frame received for 5\nI0128 22:06:35.002585    2813 log.go:172] (0xc0004574a0) (5) Data frame handling\nI0128 22:06:35.002618    2813 log.go:172] (0xc0004574a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 22:06:35.142037    2813 log.go:172] (0xc000ae3760) (0xc0004574a0) Stream removed, broadcasting: 5\nI0128 22:06:35.142229    2813 log.go:172] (0xc000ae3760) Data frame received for 1\nI0128 22:06:35.142283    2813 log.go:172] (0xc000ae3760) (0xc0006d86e0) Stream removed, broadcasting: 3\nI0128 22:06:35.142325    2813 log.go:172] (0xc000b5e8c0) (1) Data frame handling\nI0128 22:06:35.142351    2813 log.go:172] (0xc000b5e8c0) (1) Data frame sent\nI0128 22:06:35.142364    2813 log.go:172] (0xc000ae3760) (0xc000b5e8c0) Stream removed, broadcasting: 1\nI0128 22:06:35.142389    2813 log.go:172] (0xc000ae3760) Go away received\nI0128 22:06:35.144004    2813 log.go:172] (0xc000ae3760) (0xc000b5e8c0) Stream removed, broadcasting: 1\nI0128 22:06:35.144018    2813 log.go:172] (0xc000ae3760) (0xc0006d86e0) Stream removed, broadcasting: 3\nI0128 22:06:35.144030    2813 log.go:172] (0xc000ae3760) (0xc0004574a0) Stream removed, broadcasting: 5\n"
Jan 28 22:06:35.152: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 22:06:35.152: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 28 22:06:35.212: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:06:35.212: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:06:35.212: INFO: Waiting for Pod statefulset-2799/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:06:35.212: INFO: Waiting for Pod statefulset-2799/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:06:45.226: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:06:45.226: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:06:45.226: INFO: Waiting for Pod statefulset-2799/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:06:55.228: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:06:55.228: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:06:55.228: INFO: Waiting for Pod statefulset-2799/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:07:05.224: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:07:05.225: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:07:15.222: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
Jan 28 22:07:15.223: INFO: Waiting for Pod statefulset-2799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 22:07:25.224: INFO: Waiting for StatefulSet statefulset-2799/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 28 22:07:35.227: INFO: Deleting all statefulset in ns statefulset-2799
Jan 28 22:07:35.263: INFO: Scaling statefulset ss2 to 0
Jan 28 22:07:55.297: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 22:07:55.302: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:07:55.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2799" for this suite.

• [SLOW TEST:192.469 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":126,"skipped":2136,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:07:55.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:07:55.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab" in namespace "downward-api-2092" to be "success or failure"
Jan 28 22:07:55.471: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab": Phase="Pending", Reason="", readiness=false. Elapsed: 18.149333ms
Jan 28 22:07:57.479: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025626887s
Jan 28 22:07:59.488: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034544773s
Jan 28 22:08:01.495: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042008362s
Jan 28 22:08:03.507: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053813329s
Jan 28 22:08:05.516: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063047599s
STEP: Saw pod success
Jan 28 22:08:05.516: INFO: Pod "downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab" satisfied condition "success or failure"
Jan 28 22:08:05.522: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab container client-container: 
STEP: delete the pod
Jan 28 22:08:05.597: INFO: Waiting for pod downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab to disappear
Jan 28 22:08:05.604: INFO: Pod downwardapi-volume-abcba394-252e-4be8-b969-3eef8207c8ab no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:08:05.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2092" for this suite.

• [SLOW TEST:10.275 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:08:05.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 28 22:08:05.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 28 22:08:17.562: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 22:08:21.235: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:08:34.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3095" for this suite.

• [SLOW TEST:28.643 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":128,"skipped":2167,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:08:34.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-fa01e085-6130-45e0-906e-2617a0b1c7bb
STEP: Creating a pod to test consume secrets
Jan 28 22:08:34.374: INFO: Waiting up to 5m0s for pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542" in namespace "secrets-792" to be "success or failure"
Jan 28 22:08:34.378: INFO: Pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262541ms
Jan 28 22:08:36.385: INFO: Pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011033046s
Jan 28 22:08:38.500: INFO: Pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125516149s
Jan 28 22:08:40.511: INFO: Pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137032291s
Jan 28 22:08:42.522: INFO: Pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147333789s
STEP: Saw pod success
Jan 28 22:08:42.522: INFO: Pod "pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542" satisfied condition "success or failure"
Jan 28 22:08:42.525: INFO: Trying to get logs from node jerma-node pod pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542 container secret-volume-test: 
STEP: delete the pod
Jan 28 22:08:42.673: INFO: Waiting for pod pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542 to disappear
Jan 28 22:08:42.681: INFO: Pod pod-secrets-0199e5a0-79d0-4d41-8fbf-aa7b755db542 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:08:42.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-792" for this suite.

• [SLOW TEST:8.433 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2167,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:08:42.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 28 22:08:51.454: INFO: Successfully updated pod "annotationupdate116960a8-7c35-474a-a10e-c4756f043c09"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:08:53.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7270" for this suite.

• [SLOW TEST:10.823 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2168,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:08:53.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:08:53.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a" in namespace "downward-api-1649" to be "success or failure"
Jan 28 22:08:53.621: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.710944ms
Jan 28 22:08:55.627: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011505969s
Jan 28 22:08:57.635: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018531897s
Jan 28 22:08:59.645: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028933593s
Jan 28 22:09:01.751: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134853185s
Jan 28 22:09:03.759: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14287653s
STEP: Saw pod success
Jan 28 22:09:03.759: INFO: Pod "downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a" satisfied condition "success or failure"
Jan 28 22:09:03.763: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a container client-container: 
STEP: delete the pod
Jan 28 22:09:03.961: INFO: Waiting for pod downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a to disappear
Jan 28 22:09:03.987: INFO: Pod downwardapi-volume-a89ec328-bf4c-401f-aeb2-0edbd94a784a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:09:03.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1649" for this suite.

• [SLOW TEST:10.494 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:09:04.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-f2dg
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 22:09:04.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-f2dg" in namespace "subpath-2892" to be "success or failure"
Jan 28 22:09:04.310: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 32.355852ms
Jan 28 22:09:06.322: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043705917s
Jan 28 22:09:08.331: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052527357s
Jan 28 22:09:10.339: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060764753s
Jan 28 22:09:12.363: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 8.085329557s
Jan 28 22:09:14.371: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 10.092878938s
Jan 28 22:09:16.380: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 12.102424027s
Jan 28 22:09:18.399: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 14.120963285s
Jan 28 22:09:20.406: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 16.12805717s
Jan 28 22:09:22.412: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 18.133578189s
Jan 28 22:09:24.417: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 20.139135952s
Jan 28 22:09:26.663: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 22.38473673s
Jan 28 22:09:28.670: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 24.392163646s
Jan 28 22:09:30.677: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Running", Reason="", readiness=true. Elapsed: 26.399397017s
Jan 28 22:09:32.685: INFO: Pod "pod-subpath-test-downwardapi-f2dg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.406633395s
STEP: Saw pod success
Jan 28 22:09:32.685: INFO: Pod "pod-subpath-test-downwardapi-f2dg" satisfied condition "success or failure"
Jan 28 22:09:32.690: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-f2dg container test-container-subpath-downwardapi-f2dg: 
STEP: delete the pod
Jan 28 22:09:32.744: INFO: Waiting for pod pod-subpath-test-downwardapi-f2dg to disappear
Jan 28 22:09:32.749: INFO: Pod pod-subpath-test-downwardapi-f2dg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-f2dg
Jan 28 22:09:32.749: INFO: Deleting pod "pod-subpath-test-downwardapi-f2dg" in namespace "subpath-2892"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:09:32.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2892" for this suite.

• [SLOW TEST:28.767 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":132,"skipped":2208,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:09:32.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 22:09:32.917: INFO: Waiting up to 5m0s for pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4" in namespace "emptydir-4675" to be "success or failure"
Jan 28 22:09:32.936: INFO: Pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.346831ms
Jan 28 22:09:34.965: INFO: Pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047515385s
Jan 28 22:09:36.971: INFO: Pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053933787s
Jan 28 22:09:38.979: INFO: Pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06209012s
Jan 28 22:09:40.983: INFO: Pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066170683s
STEP: Saw pod success
Jan 28 22:09:40.983: INFO: Pod "pod-a5260e05-9212-4778-97b3-ffbce84daab4" satisfied condition "success or failure"
Jan 28 22:09:40.988: INFO: Trying to get logs from node jerma-node pod pod-a5260e05-9212-4778-97b3-ffbce84daab4 container test-container: 
STEP: delete the pod
Jan 28 22:09:41.412: INFO: Waiting for pod pod-a5260e05-9212-4778-97b3-ffbce84daab4 to disappear
Jan 28 22:09:41.420: INFO: Pod pod-a5260e05-9212-4778-97b3-ffbce84daab4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:09:41.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4675" for this suite.

• [SLOW TEST:8.659 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2224,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:09:41.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0128 22:09:54.056016       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 22:09:54.056: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:09:54.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1539" for this suite.

• [SLOW TEST:12.626 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":134,"skipped":2235,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:09:54.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:10:00.665: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:10:02.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:10:04.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:10:06.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:10:08.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:10:10.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:10:12.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:10:14.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846200, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:10:17.871: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:10:18.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4116" for this suite.
STEP: Destroying namespace "webhook-4116-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.360 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":135,"skipped":2237,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:10:18.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 28 22:10:18.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4971948 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 22:10:18.812: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4971948 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 28 22:10:28.826: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4971986 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 28 22:10:28.826: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4971986 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 28 22:10:38.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4972010 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 22:10:38.862: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4972010 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 28 22:10:48.897: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4972034 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 22:10:48.898: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-a 74a9405f-dbf3-411c-b118-711d1ac473b0 4972034 0 2020-01-28 22:10:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 28 22:10:58.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-b 86766fd1-b793-467d-98d8-b2a8a7bdd70e 4972058 0 2020-01-28 22:10:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 22:10:58.920: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-b 86766fd1-b793-467d-98d8-b2a8a7bdd70e 4972058 0 2020-01-28 22:10:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 28 22:11:08.928: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-b 86766fd1-b793-467d-98d8-b2a8a7bdd70e 4972082 0 2020-01-28 22:10:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 22:11:08.928: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2526 /api/v1/namespaces/watch-2526/configmaps/e2e-watch-test-configmap-b 86766fd1-b793-467d-98d8-b2a8a7bdd70e 4972082 0 2020-01-28 22:10:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:11:18.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2526" for this suite.

• [SLOW TEST:60.524 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":136,"skipped":2273,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:11:18.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Jan 28 22:11:19.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 28 22:11:19.229: INFO: stderr: ""
Jan 28 22:11:19.229: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:11:19.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7336" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":137,"skipped":2277,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:11:19.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8124
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-8124
Jan 28 22:11:19.390: INFO: Found 0 stateful pods, waiting for 1
Jan 28 22:11:29.399: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 28 22:11:29.429: INFO: Deleting all statefulset in ns statefulset-8124
Jan 28 22:11:29.443: INFO: Scaling statefulset ss to 0
Jan 28 22:11:49.605: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 22:11:49.611: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:11:49.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8124" for this suite.

• [SLOW TEST:30.455 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":138,"skipped":2282,"failed":0}
SSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:11:49.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:12:19.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2883" for this suite.

• [SLOW TEST:30.131 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":139,"skipped":2286,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:12:19.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:12:20.118: INFO: Waiting up to 5m0s for pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1" in namespace "security-context-test-9125" to be "success or failure"
Jan 28 22:12:20.135: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.00311ms
Jan 28 22:12:22.148: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030135029s
Jan 28 22:12:24.165: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046214279s
Jan 28 22:12:26.173: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054414043s
Jan 28 22:12:28.179: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060673952s
Jan 28 22:12:30.189: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07073515s
Jan 28 22:12:30.189: INFO: Pod "busybox-user-65534-656c7415-8e34-42ab-9855-3adeee89f4c1" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:12:30.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9125" for this suite.

• [SLOW TEST:10.380 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2294,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:12:30.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:12:30.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2730'
Jan 28 22:12:31.119: INFO: stderr: ""
Jan 28 22:12:31.120: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 28 22:12:31.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2730'
Jan 28 22:12:31.606: INFO: stderr: ""
Jan 28 22:12:31.607: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 28 22:12:32.613: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:32.613: INFO: Found 0 / 1
Jan 28 22:12:33.618: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:33.618: INFO: Found 0 / 1
Jan 28 22:12:34.618: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:34.618: INFO: Found 0 / 1
Jan 28 22:12:35.630: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:35.633: INFO: Found 0 / 1
Jan 28 22:12:36.612: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:36.612: INFO: Found 0 / 1
Jan 28 22:12:37.612: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:37.612: INFO: Found 0 / 1
Jan 28 22:12:38.621: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:38.622: INFO: Found 0 / 1
Jan 28 22:12:39.613: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:39.613: INFO: Found 1 / 1
Jan 28 22:12:39.613: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 28 22:12:39.616: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 22:12:39.616: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 28 22:12:39.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-v4mg6 --namespace=kubectl-2730'
Jan 28 22:12:39.759: INFO: stderr: ""
Jan 28 22:12:39.759: INFO: stdout: "Name:         agnhost-master-v4mg6\nNamespace:    kubectl-2730\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Tue, 28 Jan 2020 22:12:31 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://b0674fca7b50064936e2843eff312995c4e9fdfc72c8f30599d7e0870e928e63\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 28 Jan 2020 22:12:37 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-885qk (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-885qk:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-885qk\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-2730/agnhost-master-v4mg6 to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 28 22:12:39.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2730'
Jan 28 22:12:39.882: INFO: stderr: ""
Jan 28 22:12:39.882: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-2730\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-v4mg6\n"
Jan 28 22:12:39.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2730'
Jan 28 22:12:40.053: INFO: stderr: ""
Jan 28 22:12:40.053: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-2730\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.92.139\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 28 22:12:40.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 28 22:12:40.234: INFO: stderr: ""
Jan 28 22:12:40.234: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Tue, 28 Jan 2020 22:12:34 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 28 Jan 2020 22:07:46 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 28 Jan 2020 22:07:46 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 28 Jan 2020 22:07:46 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 28 Jan 2020 22:07:46 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         24d\n  kubectl-2730                agnhost-master-v4mg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 28 22:12:40.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2730'
Jan 28 22:12:40.381: INFO: stderr: ""
Jan 28 22:12:40.382: INFO: stdout: "Name:         kubectl-2730\nLabels:       e2e-framework=kubectl\n              e2e-run=452199e7-bf93-4c7a-b9a7-9962d737460b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:12:40.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2730" for this suite.

• [SLOW TEST:10.201 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":141,"skipped":2296,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:12:40.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jan 28 22:12:40.489: INFO: Waiting up to 5m0s for pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4" in namespace "containers-5836" to be "success or failure"
Jan 28 22:12:40.497: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.627186ms
Jan 28 22:12:42.538: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049042269s
Jan 28 22:12:44.564: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07514711s
Jan 28 22:12:46.608: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118879807s
Jan 28 22:12:48.615: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125866389s
Jan 28 22:12:50.627: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137977993s
STEP: Saw pod success
Jan 28 22:12:50.627: INFO: Pod "client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4" satisfied condition "success or failure"
Jan 28 22:12:50.634: INFO: Trying to get logs from node jerma-node pod client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4 container test-container: 
STEP: delete the pod
Jan 28 22:12:50.718: INFO: Waiting for pod client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4 to disappear
Jan 28 22:12:50.728: INFO: Pod client-containers-ce8f4448-b035-4f32-a0f2-661b08b934b4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:12:50.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5836" for this suite.

• [SLOW TEST:10.488 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2319,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:12:50.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:12:51.100: INFO: Create a RollingUpdate DaemonSet
Jan 28 22:12:51.107: INFO: Check that daemon pods launch on every node of the cluster
Jan 28 22:12:51.158: INFO: Number of nodes with available pods: 0
Jan 28 22:12:51.159: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:52.855: INFO: Number of nodes with available pods: 0
Jan 28 22:12:52.855: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:53.383: INFO: Number of nodes with available pods: 0
Jan 28 22:12:53.383: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:54.332: INFO: Number of nodes with available pods: 0
Jan 28 22:12:54.333: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:55.183: INFO: Number of nodes with available pods: 0
Jan 28 22:12:55.183: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:56.174: INFO: Number of nodes with available pods: 0
Jan 28 22:12:56.174: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:57.958: INFO: Number of nodes with available pods: 0
Jan 28 22:12:57.958: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:58.444: INFO: Number of nodes with available pods: 0
Jan 28 22:12:58.444: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:12:59.315: INFO: Number of nodes with available pods: 0
Jan 28 22:12:59.315: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:13:00.209: INFO: Number of nodes with available pods: 0
Jan 28 22:13:00.209: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:13:01.800: INFO: Number of nodes with available pods: 2
Jan 28 22:13:01.800: INFO: Number of running nodes: 2, number of available pods: 2
Jan 28 22:13:01.800: INFO: Update the DaemonSet to trigger a rollout
Jan 28 22:13:01.888: INFO: Updating DaemonSet daemon-set
Jan 28 22:13:13.915: INFO: Roll back the DaemonSet before rollout is complete
Jan 28 22:13:13.940: INFO: Updating DaemonSet daemon-set
Jan 28 22:13:13.941: INFO: Make sure DaemonSet rollback is complete
Jan 28 22:13:14.122: INFO: Wrong image for pod: daemon-set-n9ds2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 22:13:14.123: INFO: Pod daemon-set-n9ds2 is not available
Jan 28 22:13:15.400: INFO: Wrong image for pod: daemon-set-n9ds2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 22:13:15.400: INFO: Pod daemon-set-n9ds2 is not available
Jan 28 22:13:16.404: INFO: Wrong image for pod: daemon-set-n9ds2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 22:13:16.404: INFO: Pod daemon-set-n9ds2 is not available
Jan 28 22:13:17.752: INFO: Wrong image for pod: daemon-set-n9ds2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 22:13:17.753: INFO: Pod daemon-set-n9ds2 is not available
Jan 28 22:13:18.399: INFO: Pod daemon-set-jw97z is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3316, will wait for the garbage collector to delete the pods
Jan 28 22:13:18.500: INFO: Deleting DaemonSet.extensions daemon-set took: 15.479866ms
Jan 28 22:13:19.701: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.201261992s
Jan 28 22:13:26.612: INFO: Number of nodes with available pods: 0
Jan 28 22:13:26.612: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 22:13:26.617: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3316/daemonsets","resourceVersion":"4972737"},"items":null}

Jan 28 22:13:26.621: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3316/pods","resourceVersion":"4972737"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:13:26.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3316" for this suite.

• [SLOW TEST:35.891 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":143,"skipped":2322,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:13:26.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-94511134-201b-4a31-acf5-184896f6ef3e
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:13:37.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-212" for this suite.

• [SLOW TEST:10.295 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:13:37.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 28 22:13:46.000: INFO: Successfully updated pod "labelsupdateae218bc8-e0d1-42e7-86e7-88da8ebdb6f5"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:13:48.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-147" for this suite.

• [SLOW TEST:10.991 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2353,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:13:48.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 28 22:13:48.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5668'
Jan 28 22:13:50.226: INFO: stderr: ""
Jan 28 22:13:50.226: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 22:13:50.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5668'
Jan 28 22:13:50.383: INFO: stderr: ""
Jan 28 22:13:50.383: INFO: stdout: "update-demo-nautilus-8wft6 update-demo-nautilus-mpx6c "
Jan 28 22:13:50.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wft6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5668'
Jan 28 22:13:50.575: INFO: stderr: ""
Jan 28 22:13:50.576: INFO: stdout: ""
Jan 28 22:13:50.576: INFO: update-demo-nautilus-8wft6 is created but not running
Jan 28 22:13:55.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5668'
Jan 28 22:13:56.166: INFO: stderr: ""
Jan 28 22:13:56.166: INFO: stdout: "update-demo-nautilus-8wft6 update-demo-nautilus-mpx6c "
Jan 28 22:13:56.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wft6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5668'
Jan 28 22:13:56.362: INFO: stderr: ""
Jan 28 22:13:56.362: INFO: stdout: ""
Jan 28 22:13:56.362: INFO: update-demo-nautilus-8wft6 is created but not running
Jan 28 22:14:01.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5668'
Jan 28 22:14:01.545: INFO: stderr: ""
Jan 28 22:14:01.545: INFO: stdout: "update-demo-nautilus-8wft6 update-demo-nautilus-mpx6c "
Jan 28 22:14:01.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wft6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5668'
Jan 28 22:14:01.683: INFO: stderr: ""
Jan 28 22:14:01.683: INFO: stdout: "true"
Jan 28 22:14:01.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wft6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5668'
Jan 28 22:14:01.765: INFO: stderr: ""
Jan 28 22:14:01.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 22:14:01.765: INFO: validating pod update-demo-nautilus-8wft6
Jan 28 22:14:01.772: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 22:14:01.772: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 22:14:01.772: INFO: update-demo-nautilus-8wft6 is verified up and running
Jan 28 22:14:01.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpx6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5668'
Jan 28 22:14:01.912: INFO: stderr: ""
Jan 28 22:14:01.912: INFO: stdout: "true"
Jan 28 22:14:01.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mpx6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5668'
Jan 28 22:14:02.075: INFO: stderr: ""
Jan 28 22:14:02.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 22:14:02.075: INFO: validating pod update-demo-nautilus-mpx6c
Jan 28 22:14:02.108: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 22:14:02.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 22:14:02.108: INFO: update-demo-nautilus-mpx6c is verified up and running
STEP: using delete to clean up resources
Jan 28 22:14:02.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5668'
Jan 28 22:14:02.301: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 22:14:02.301: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 28 22:14:02.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5668'
Jan 28 22:14:02.602: INFO: stderr: "No resources found in kubectl-5668 namespace.\n"
Jan 28 22:14:02.603: INFO: stdout: ""
Jan 28 22:14:02.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5668 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 22:14:02.969: INFO: stderr: ""
Jan 28 22:14:02.969: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:14:02.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5668" for this suite.

• [SLOW TEST:14.899 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":146,"skipped":2356,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:14:02.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:14:03.960: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:14:07.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3377" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":147,"skipped":2368,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:14:07.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 28 22:14:08.416: INFO: Pod name wrapped-volume-race-586b4f21-1f7f-4725-8c45-2d394403f67a: Found 0 pods out of 5
Jan 28 22:14:13.424: INFO: Pod name wrapped-volume-race-586b4f21-1f7f-4725-8c45-2d394403f67a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-586b4f21-1f7f-4725-8c45-2d394403f67a in namespace emptydir-wrapper-636, will wait for the garbage collector to delete the pods
Jan 28 22:14:43.553: INFO: Deleting ReplicationController wrapped-volume-race-586b4f21-1f7f-4725-8c45-2d394403f67a took: 26.490187ms
Jan 28 22:14:44.255: INFO: Terminating ReplicationController wrapped-volume-race-586b4f21-1f7f-4725-8c45-2d394403f67a pods took: 702.370133ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 22:15:03.284: INFO: Pod name wrapped-volume-race-c2ff6db4-478b-4bfb-b985-6fe439212d8b: Found 0 pods out of 5
Jan 28 22:15:08.302: INFO: Pod name wrapped-volume-race-c2ff6db4-478b-4bfb-b985-6fe439212d8b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c2ff6db4-478b-4bfb-b985-6fe439212d8b in namespace emptydir-wrapper-636, will wait for the garbage collector to delete the pods
Jan 28 22:15:34.428: INFO: Deleting ReplicationController wrapped-volume-race-c2ff6db4-478b-4bfb-b985-6fe439212d8b took: 36.896207ms
Jan 28 22:15:34.829: INFO: Terminating ReplicationController wrapped-volume-race-c2ff6db4-478b-4bfb-b985-6fe439212d8b pods took: 400.390433ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 22:15:53.599: INFO: Pod name wrapped-volume-race-02f81641-2f75-4851-a447-e23b55b66280: Found 0 pods out of 5
Jan 28 22:15:58.608: INFO: Pod name wrapped-volume-race-02f81641-2f75-4851-a447-e23b55b66280: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-02f81641-2f75-4851-a447-e23b55b66280 in namespace emptydir-wrapper-636, will wait for the garbage collector to delete the pods
Jan 28 22:16:24.718: INFO: Deleting ReplicationController wrapped-volume-race-02f81641-2f75-4851-a447-e23b55b66280 took: 9.939258ms
Jan 28 22:16:25.119: INFO: Terminating ReplicationController wrapped-volume-race-02f81641-2f75-4851-a447-e23b55b66280 pods took: 400.551461ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:16:44.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-636" for this suite.

• [SLOW TEST:157.458 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":148,"skipped":2383,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:16:44.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:16:44.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:16:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5672" for this suite.

• [SLOW TEST:8.417 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2393,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:16:53.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 28 22:17:02.338: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:17:03.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1132" for this suite.

• [SLOW TEST:10.241 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":150,"skipped":2409,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:17:03.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 22:17:21.634: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.655: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.661: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.667: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.683: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.687: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.691: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.695: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:21.702: INFO: Lookups using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local]

Jan 28 22:17:26.759: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.772: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.779: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.787: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.807: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.816: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.823: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.831: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:26.841: INFO: Lookups using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local]

Jan 28 22:17:31.711: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.717: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.722: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.730: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.750: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.760: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.766: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.772: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:31.783: INFO: Lookups using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local]

Jan 28 22:17:36.716: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.722: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.727: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.732: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.746: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.751: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.756: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.761: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:36.770: INFO: Lookups using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local]

Jan 28 22:17:41.708: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.712: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.715: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.719: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.731: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.734: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.737: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.740: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:41.747: INFO: Lookups using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local]

Jan 28 22:17:46.710: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.714: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.718: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.722: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.735: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.740: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.744: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.749: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local from pod dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2: the server could not find the requested resource (get pods dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2)
Jan 28 22:17:46.760: INFO: Lookups using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5986.svc.cluster.local jessie_udp@dns-test-service-2.dns-5986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5986.svc.cluster.local]

Jan 28 22:17:51.812: INFO: DNS probes using dns-5986/dns-test-7e5f787e-f665-4ea4-8cce-20090817e4a2 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:17:51.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5986" for this suite.

• [SLOW TEST:48.599 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":151,"skipped":2413,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:17:52.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:17:52.329: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bd618634-e6e7-466f-912a-8dd6f4ea5b2f", Controller:(*bool)(0xc004897a4a), BlockOwnerDeletion:(*bool)(0xc004897a4b)}}
Jan 28 22:17:52.347: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3b51384d-b3f2-49d9-9a83-2a41ccc141f3", Controller:(*bool)(0xc005b38eea), BlockOwnerDeletion:(*bool)(0xc005b38eeb)}}
Jan 28 22:17:52.375: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"49643ad3-d2c6-479b-bce8-d53ed588be48", Controller:(*bool)(0xc004897c0a), BlockOwnerDeletion:(*bool)(0xc004897c0b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:17:57.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6863" for this suite.

• [SLOW TEST:5.504 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":152,"skipped":2419,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:17:57.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:17:58.046: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:18:00.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:18:02.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:18:04.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:18:06.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:18:08.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715846678, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:18:11.188: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the API server cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:18:11.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-430" for this suite.
STEP: Destroying namespace "webhook-430-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.022 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":153,"skipped":2419,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:18:11.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:18:23.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9618" for this suite.

• [SLOW TEST:12.262 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2423,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:18:23.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:18:24.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 28 22:18:24.716: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T22:18:24Z generation:1 name:name1 resourceVersion:4974642 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:58dcffbd-70a2-4a17-bcee-e24c6ef517df] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 28 22:18:34.736: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T22:18:34Z generation:1 name:name2 resourceVersion:4974677 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:10db7788-b2db-428e-8a73-8489472a786b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 28 22:18:44.747: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T22:18:24Z generation:2 name:name1 resourceVersion:4974701 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:58dcffbd-70a2-4a17-bcee-e24c6ef517df] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 28 22:18:54.755: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T22:18:34Z generation:2 name:name2 resourceVersion:4974725 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:10db7788-b2db-428e-8a73-8489472a786b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 28 22:19:04.769: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T22:18:24Z generation:2 name:name1 resourceVersion:4974749 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:58dcffbd-70a2-4a17-bcee-e24c6ef517df] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 28 22:19:14.866: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T22:18:34Z generation:2 name:name2 resourceVersion:4974773 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:10db7788-b2db-428e-8a73-8489472a786b] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:19:25.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-7709" for this suite.

• [SLOW TEST:61.574 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":155,"skipped":2436,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:19:25.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 22:19:25.549: INFO: Waiting up to 5m0s for pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5" in namespace "emptydir-2667" to be "success or failure"
Jan 28 22:19:25.566: INFO: Pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.90205ms
Jan 28 22:19:27.578: INFO: Pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027959728s
Jan 28 22:19:29.589: INFO: Pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039317431s
Jan 28 22:19:31.598: INFO: Pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04816142s
Jan 28 22:19:33.612: INFO: Pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062574949s
STEP: Saw pod success
Jan 28 22:19:33.613: INFO: Pod "pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5" satisfied condition "success or failure"
Jan 28 22:19:33.616: INFO: Trying to get logs from node jerma-node pod pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5 container test-container: 
STEP: delete the pod
Jan 28 22:19:33.677: INFO: Waiting for pod pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5 to disappear
Jan 28 22:19:33.782: INFO: Pod pod-cd1f18c1-18ec-4c98-95ee-e51396ac59d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:19:33.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2667" for this suite.

• [SLOW TEST:8.410 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2451,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:19:33.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:19:33.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191" in namespace "projected-3717" to be "success or failure"
Jan 28 22:19:34.008: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191": Phase="Pending", Reason="", readiness=false. Elapsed: 26.114557ms
Jan 28 22:19:36.017: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035619898s
Jan 28 22:19:38.022: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039944073s
Jan 28 22:19:40.026: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044460623s
Jan 28 22:19:42.213: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231407366s
Jan 28 22:19:44.224: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.242528404s
STEP: Saw pod success
Jan 28 22:19:44.225: INFO: Pod "downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191" satisfied condition "success or failure"
Jan 28 22:19:44.229: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191 container client-container: 
STEP: delete the pod
Jan 28 22:19:44.301: INFO: Waiting for pod downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191 to disappear
Jan 28 22:19:44.306: INFO: Pod downwardapi-volume-e6525218-a503-4fa1-a496-1fb8df165191 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:19:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3717" for this suite.

• [SLOW TEST:10.510 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2464,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:19:44.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jan 28 22:19:44.380: INFO: Waiting up to 5m0s for pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866" in namespace "var-expansion-346" to be "success or failure"
Jan 28 22:19:44.386: INFO: Pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866": Phase="Pending", Reason="", readiness=false. Elapsed: 5.745546ms
Jan 28 22:19:46.401: INFO: Pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021463657s
Jan 28 22:19:48.415: INFO: Pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03479323s
Jan 28 22:19:50.424: INFO: Pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044651856s
Jan 28 22:19:52.435: INFO: Pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055351681s
STEP: Saw pod success
Jan 28 22:19:52.435: INFO: Pod "var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866" satisfied condition "success or failure"
Jan 28 22:19:52.440: INFO: Trying to get logs from node jerma-node pod var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866 container dapi-container: 
STEP: delete the pod
Jan 28 22:19:52.493: INFO: Waiting for pod var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866 to disappear
Jan 28 22:19:52.511: INFO: Pod var-expansion-7ed4e453-cf18-4a68-a3c4-88d522d6c866 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:19:52.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-346" for this suite.

• [SLOW TEST:8.205 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2480,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:19:52.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 22:19:52.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5281'
Jan 28 22:19:52.805: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 22:19:52.805: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773
Jan 28 22:19:52.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5281'
Jan 28 22:19:53.046: INFO: stderr: ""
Jan 28 22:19:53.046: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:19:53.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5281" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":159,"skipped":2522,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:19:53.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 28 22:19:53.218: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 22:19:53.265: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 22:19:53.268: INFO: Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 22:19:53.278: INFO: e2e-test-httpd-job-2p5ml from kubectl-5281 started at  (0 container statuses recorded)
Jan 28 22:19:53.278: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.278: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:19:53.278: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 22:19:53.278: INFO: 	Container weave ready: true, restart count 1
Jan 28 22:19:53.278: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:19:53.278: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 22:19:53.308: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:19:53.308: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:19:53.308: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 22:19:53.308: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:19:53.308: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 22:19:53.308: INFO: 	Container weave ready: true, restart count 0
Jan 28 22:19:53.308: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:19:53.308: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 22:19:53.308: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 22:19:53.308: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 22:19:53.308: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9edc0eda-18ab-4510-ab48-13ac857e86e0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9edc0eda-18ab-4510-ab48-13ac857e86e0 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9edc0eda-18ab-4510-ab48-13ac857e86e0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:20:11.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9132" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:18.488 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":160,"skipped":2525,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:20:11.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:20:18.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5522" for this suite.
STEP: Destroying namespace "nsdeletetest-8892" for this suite.
Jan 28 22:20:18.635: INFO: Namespace nsdeletetest-8892 was already deleted
STEP: Destroying namespace "nsdeletetest-225" for this suite.

• [SLOW TEST:7.099 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":161,"skipped":2530,"failed":0}
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:20:18.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jan 28 22:20:18.815: INFO: Waiting up to 5m0s for pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a" in namespace "var-expansion-2684" to be "success or failure"
Jan 28 22:20:18.842: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.637646ms
Jan 28 22:20:20.849: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033093212s
Jan 28 22:20:22.855: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0395622s
Jan 28 22:20:24.864: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048603916s
Jan 28 22:20:26.873: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057078257s
Jan 28 22:20:28.883: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067736913s
STEP: Saw pod success
Jan 28 22:20:28.884: INFO: Pod "var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a" satisfied condition "success or failure"
Jan 28 22:20:28.890: INFO: Trying to get logs from node jerma-node pod var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a container dapi-container: 
STEP: delete the pod
Jan 28 22:20:29.037: INFO: Waiting for pod var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a to disappear
Jan 28 22:20:29.095: INFO: Pod var-expansion-40df1b35-fe2e-4e17-9ddb-d60532b2fe4a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:20:29.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2684" for this suite.

• [SLOW TEST:10.455 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2530,"failed":0}
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:20:29.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Jan 28 22:20:29.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5491 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 28 22:20:29.610: INFO: stderr: ""
Jan 28 22:20:29.610: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Jan 28 22:20:29.610: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 28 22:20:29.610: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5491" to be "running and ready, or succeeded"
Jan 28 22:20:29.714: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 104.045074ms
Jan 28 22:20:31.723: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112482674s
Jan 28 22:20:33.731: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120924782s
Jan 28 22:20:35.739: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.128566289s
Jan 28 22:20:35.739: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 28 22:20:35.739: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 28 22:20:35.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5491'
Jan 28 22:20:35.962: INFO: stderr: ""
Jan 28 22:20:35.962: INFO: stdout: "I0128 22:20:35.222333       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vfk9 432\nI0128 22:20:35.422617       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/c49q 511\nI0128 22:20:35.622776       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/zqx 225\nI0128 22:20:35.823141       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/vql 405\n"
STEP: limiting log lines
Jan 28 22:20:35.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5491 --tail=1'
Jan 28 22:20:36.194: INFO: stderr: ""
Jan 28 22:20:36.194: INFO: stdout: "I0128 22:20:36.023434       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/4vw 495\n"
Jan 28 22:20:36.194: INFO: got output "I0128 22:20:36.023434       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/4vw 495\n"
STEP: limiting log bytes
Jan 28 22:20:36.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5491 --limit-bytes=1'
Jan 28 22:20:36.329: INFO: stderr: ""
Jan 28 22:20:36.329: INFO: stdout: "I"
Jan 28 22:20:36.329: INFO: got output "I"
STEP: exposing timestamps
Jan 28 22:20:36.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5491 --tail=1 --timestamps'
Jan 28 22:20:36.453: INFO: stderr: ""
Jan 28 22:20:36.453: INFO: stdout: "2020-01-28T22:20:36.422760438Z I0128 22:20:36.422541       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/cbl4 382\n"
Jan 28 22:20:36.453: INFO: got output "2020-01-28T22:20:36.422760438Z I0128 22:20:36.422541       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/cbl4 382\n"
STEP: restricting to a time range
Jan 28 22:20:38.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5491 --since=1s'
Jan 28 22:20:39.202: INFO: stderr: ""
Jan 28 22:20:39.202: INFO: stdout: "I0128 22:20:38.222531       1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/qpgw 549\nI0128 22:20:38.422645       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/9s64 404\nI0128 22:20:38.622683       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/lsll 560\nI0128 22:20:38.822978       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/x5x 275\nI0128 22:20:39.022720       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/snr 553\n"
Jan 28 22:20:39.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5491 --since=24h'
Jan 28 22:20:39.368: INFO: stderr: ""
Jan 28 22:20:39.368: INFO: stdout: "I0128 22:20:35.222333       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vfk9 432\nI0128 22:20:35.422617       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/c49q 511\nI0128 22:20:35.622776       1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/zqx 225\nI0128 22:20:35.823141       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/vql 405\nI0128 22:20:36.023434       1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/4vw 495\nI0128 22:20:36.222652       1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/d5d 236\nI0128 22:20:36.422541       1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/cbl4 382\nI0128 22:20:36.622581       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/4656 582\nI0128 22:20:36.822603       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/tvpq 402\nI0128 22:20:37.022603       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/l59 479\nI0128 22:20:37.222733       1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/9pg 426\nI0128 22:20:37.422588       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/tbcw 323\nI0128 22:20:37.622505       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/xsb 371\nI0128 22:20:37.822787       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/w54 570\nI0128 22:20:38.022633       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/6drt 250\nI0128 22:20:38.222531       1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/qpgw 549\nI0128 22:20:38.422645       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/9s64 404\nI0128 22:20:38.622683       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/lsll 560\nI0128 22:20:38.822978       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/x5x 275\nI0128 22:20:39.022720       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/snr 553\nI0128 22:20:39.222759       1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/6v7 485\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Jan 28 22:20:39.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5491'
Jan 28 22:20:44.448: INFO: stderr: ""
Jan 28 22:20:44.448: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:20:44.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5491" for this suite.

• [SLOW TEST:15.351 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":163,"skipped":2530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:20:44.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 22:20:44.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1496'
Jan 28 22:20:44.622: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 22:20:44.622: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Jan 28 22:20:44.656: INFO: scanned /root for discovery docs: 
Jan 28 22:20:44.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1496'
Jan 28 22:21:05.789: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 22:21:05.789: INFO: stdout: "Created e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116\nScaling up e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 28 22:21:05.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1496'
Jan 28 22:21:05.931: INFO: stderr: ""
Jan 28 22:21:05.931: INFO: stdout: "e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116-pk8s9 e2e-test-httpd-rc-dc8nb "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Jan 28 22:21:10.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1496'
Jan 28 22:21:11.090: INFO: stderr: ""
Jan 28 22:21:11.090: INFO: stdout: "e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116-pk8s9 e2e-test-httpd-rc-dc8nb "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Jan 28 22:21:16.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1496'
Jan 28 22:21:16.282: INFO: stderr: ""
Jan 28 22:21:16.283: INFO: stdout: "e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116-pk8s9 "
Jan 28 22:21:16.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116-pk8s9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1496'
Jan 28 22:21:16.420: INFO: stderr: ""
Jan 28 22:21:16.421: INFO: stdout: "true"
Jan 28 22:21:16.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116-pk8s9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1496'
Jan 28 22:21:16.655: INFO: stderr: ""
Jan 28 22:21:16.655: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 28 22:21:16.655: INFO: e2e-test-httpd-rc-a95f4a16614b1d77f2a60b0550e68116-pk8s9 is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan 28 22:21:16.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1496'
Jan 28 22:21:16.777: INFO: stderr: ""
Jan 28 22:21:16.777: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:21:16.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1496" for this suite.

• [SLOW TEST:32.334 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":164,"skipped":2552,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:21:16.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 28 22:21:16.922: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7509 /api/v1/namespaces/watch-7509/configmaps/e2e-watch-test-watch-closed 397cf5ae-889b-49b5-9c8b-eed3d23b3937 4975350 0 2020-01-28 22:21:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 22:21:16.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7509 /api/v1/namespaces/watch-7509/configmaps/e2e-watch-test-watch-closed 397cf5ae-889b-49b5-9c8b-eed3d23b3937 4975351 0 2020-01-28 22:21:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 28 22:21:16.989: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7509 /api/v1/namespaces/watch-7509/configmaps/e2e-watch-test-watch-closed 397cf5ae-889b-49b5-9c8b-eed3d23b3937 4975352 0 2020-01-28 22:21:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 22:21:16.989: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7509 /api/v1/namespaces/watch-7509/configmaps/e2e-watch-test-watch-closed 397cf5ae-889b-49b5-9c8b-eed3d23b3937 4975353 0 2020-01-28 22:21:16 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:21:16.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7509" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":165,"skipped":2565,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:21:17.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 28 22:21:17.325: INFO: Waiting up to 5m0s for pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc" in namespace "downward-api-1212" to be "success or failure"
Jan 28 22:21:17.502: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 176.14343ms
Jan 28 22:21:19.507: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18167614s
Jan 28 22:21:21.515: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189114315s
Jan 28 22:21:23.602: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276146196s
Jan 28 22:21:25.652: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326196769s
Jan 28 22:21:27.662: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.336385016s
STEP: Saw pod success
Jan 28 22:21:27.662: INFO: Pod "downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc" satisfied condition "success or failure"
Jan 28 22:21:27.677: INFO: Trying to get logs from node jerma-node pod downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc container dapi-container: 
STEP: delete the pod
Jan 28 22:21:27.822: INFO: Waiting for pod downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc to disappear
Jan 28 22:21:27.833: INFO: Pod downward-api-5dee22d4-4a9b-4a74-a96f-38fb06ce4ccc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:21:27.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1212" for this suite.

• [SLOW TEST:10.847 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2567,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:21:27.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-c6sn
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 22:21:28.053: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-c6sn" in namespace "subpath-2005" to be "success or failure"
Jan 28 22:21:28.107: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Pending", Reason="", readiness=false. Elapsed: 53.269961ms
Jan 28 22:21:30.126: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073060482s
Jan 28 22:21:32.133: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08021841s
Jan 28 22:21:34.141: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087853s
Jan 28 22:21:36.152: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 8.098250433s
Jan 28 22:21:38.162: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 10.108285438s
Jan 28 22:21:40.169: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 12.116021625s
Jan 28 22:21:42.176: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 14.122927628s
Jan 28 22:21:44.180: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 16.126803557s
Jan 28 22:21:46.186: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 18.132453096s
Jan 28 22:21:48.191: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 20.137622885s
Jan 28 22:21:50.217: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 22.163496287s
Jan 28 22:21:52.226: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 24.172900565s
Jan 28 22:21:54.233: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 26.180176302s
Jan 28 22:21:56.240: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Running", Reason="", readiness=true. Elapsed: 28.186638137s
Jan 28 22:21:58.251: INFO: Pod "pod-subpath-test-secret-c6sn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.197222532s
STEP: Saw pod success
Jan 28 22:21:58.251: INFO: Pod "pod-subpath-test-secret-c6sn" satisfied condition "success or failure"
Jan 28 22:21:58.257: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-c6sn container test-container-subpath-secret-c6sn: 
STEP: delete the pod
Jan 28 22:21:58.313: INFO: Waiting for pod pod-subpath-test-secret-c6sn to disappear
Jan 28 22:21:58.318: INFO: Pod pod-subpath-test-secret-c6sn no longer exists
STEP: Deleting pod pod-subpath-test-secret-c6sn
Jan 28 22:21:58.318: INFO: Deleting pod "pod-subpath-test-secret-c6sn" in namespace "subpath-2005"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:21:58.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2005" for this suite.

• [SLOW TEST:30.480 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":167,"skipped":2571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:21:58.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4088
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4088
I0128 22:21:58.689150       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4088, replica count: 2
I0128 22:22:01.740457       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:22:04.741022       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:22:07.742188       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 22:22:07.742: INFO: Creating new exec pod
Jan 28 22:22:16.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4088 execpod594p4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 28 22:22:17.260: INFO: stderr: "I0128 22:22:17.085224    3635 log.go:172] (0xc00090c0b0) (0xc0006379a0) Create stream\nI0128 22:22:17.085557    3635 log.go:172] (0xc00090c0b0) (0xc0006379a0) Stream added, broadcasting: 1\nI0128 22:22:17.089456    3635 log.go:172] (0xc00090c0b0) Reply frame received for 1\nI0128 22:22:17.089493    3635 log.go:172] (0xc00090c0b0) (0xc000928000) Create stream\nI0128 22:22:17.089502    3635 log.go:172] (0xc00090c0b0) (0xc000928000) Stream added, broadcasting: 3\nI0128 22:22:17.090655    3635 log.go:172] (0xc00090c0b0) Reply frame received for 3\nI0128 22:22:17.090687    3635 log.go:172] (0xc00090c0b0) (0xc00060e5a0) Create stream\nI0128 22:22:17.090701    3635 log.go:172] (0xc00090c0b0) (0xc00060e5a0) Stream added, broadcasting: 5\nI0128 22:22:17.091785    3635 log.go:172] (0xc00090c0b0) Reply frame received for 5\nI0128 22:22:17.157902    3635 log.go:172] (0xc00090c0b0) Data frame received for 5\nI0128 22:22:17.157972    3635 log.go:172] (0xc00060e5a0) (5) Data frame handling\nI0128 22:22:17.157998    3635 log.go:172] (0xc00060e5a0) (5) Data frame sent\nI0128 22:22:17.158003    3635 log.go:172] (0xc00090c0b0) Data frame received for 5\nI0128 22:22:17.158008    3635 log.go:172] (0xc00060e5a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0128 22:22:17.158036    3635 log.go:172] (0xc00060e5a0) (5) Data frame sent\nI0128 22:22:17.249826    3635 log.go:172] (0xc00090c0b0) Data frame received for 1\nI0128 22:22:17.250004    3635 log.go:172] (0xc00090c0b0) (0xc00060e5a0) Stream removed, broadcasting: 5\nI0128 22:22:17.250127    3635 log.go:172] (0xc00090c0b0) (0xc000928000) Stream removed, broadcasting: 3\nI0128 22:22:17.250184    3635 log.go:172] (0xc0006379a0) (1) Data frame handling\nI0128 22:22:17.250236    3635 log.go:172] (0xc0006379a0) (1) Data frame sent\nI0128 22:22:17.250252    3635 log.go:172] (0xc00090c0b0) (0xc0006379a0) Stream removed, broadcasting: 1\nI0128 22:22:17.251059    3635 log.go:172] (0xc00090c0b0) Go away received\nI0128 22:22:17.251695    3635 log.go:172] (0xc00090c0b0) (0xc0006379a0) Stream removed, broadcasting: 1\nI0128 22:22:17.251723    3635 log.go:172] (0xc00090c0b0) (0xc000928000) Stream removed, broadcasting: 3\nI0128 22:22:17.251732    3635 log.go:172] (0xc00090c0b0) (0xc00060e5a0) Stream removed, broadcasting: 5\n"
Jan 28 22:22:17.260: INFO: stdout: ""
Jan 28 22:22:17.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4088 execpod594p4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.19.154 80'
Jan 28 22:22:17.546: INFO: stderr: "I0128 22:22:17.413131    3656 log.go:172] (0xc000969340) (0xc000b28320) Create stream\nI0128 22:22:17.413336    3656 log.go:172] (0xc000969340) (0xc000b28320) Stream added, broadcasting: 1\nI0128 22:22:17.419113    3656 log.go:172] (0xc000969340) Reply frame received for 1\nI0128 22:22:17.419148    3656 log.go:172] (0xc000969340) (0xc0006e2640) Create stream\nI0128 22:22:17.419159    3656 log.go:172] (0xc000969340) (0xc0006e2640) Stream added, broadcasting: 3\nI0128 22:22:17.420280    3656 log.go:172] (0xc000969340) Reply frame received for 3\nI0128 22:22:17.420302    3656 log.go:172] (0xc000969340) (0xc00056d400) Create stream\nI0128 22:22:17.420312    3656 log.go:172] (0xc000969340) (0xc00056d400) Stream added, broadcasting: 5\nI0128 22:22:17.421414    3656 log.go:172] (0xc000969340) Reply frame received for 5\nI0128 22:22:17.472059    3656 log.go:172] (0xc000969340) Data frame received for 5\nI0128 22:22:17.472160    3656 log.go:172] (0xc00056d400) (5) Data frame handling\nI0128 22:22:17.472202    3656 log.go:172] (0xc00056d400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.19.154 80\nI0128 22:22:17.474523    3656 log.go:172] (0xc000969340) Data frame received for 5\nI0128 22:22:17.474641    3656 log.go:172] (0xc00056d400) (5) Data frame handling\nI0128 22:22:17.474721    3656 log.go:172] (0xc00056d400) (5) Data frame sent\nConnection to 10.96.19.154 80 port [tcp/http] succeeded!\nI0128 22:22:17.534365    3656 log.go:172] (0xc000969340) Data frame received for 1\nI0128 22:22:17.534503    3656 log.go:172] (0xc000b28320) (1) Data frame handling\nI0128 22:22:17.534526    3656 log.go:172] (0xc000b28320) (1) Data frame sent\nI0128 22:22:17.534571    3656 log.go:172] (0xc000969340) (0xc000b28320) Stream removed, broadcasting: 1\nI0128 22:22:17.536138    3656 log.go:172] (0xc000969340) (0xc0006e2640) Stream removed, broadcasting: 3\nI0128 22:22:17.536299    3656 log.go:172] (0xc000969340) (0xc00056d400) Stream removed, broadcasting: 5\nI0128 22:22:17.536336    3656 log.go:172] (0xc000969340) Go away received\nI0128 22:22:17.536745    3656 log.go:172] (0xc000969340) (0xc000b28320) Stream removed, broadcasting: 1\nI0128 22:22:17.536775    3656 log.go:172] (0xc000969340) (0xc0006e2640) Stream removed, broadcasting: 3\nI0128 22:22:17.536783    3656 log.go:172] (0xc000969340) (0xc00056d400) Stream removed, broadcasting: 5\n"
Jan 28 22:22:17.546: INFO: stdout: ""
Jan 28 22:22:17.546: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:22:17.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4088" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:19.289 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":168,"skipped":2599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:22:17.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:22:17.682: INFO: Creating deployment "webserver-deployment"
Jan 28 22:22:17.686: INFO: Waiting for observed generation 1
Jan 28 22:22:20.034: INFO: Waiting for all required pods to come up
Jan 28 22:22:20.123: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 28 22:22:48.248: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 28 22:22:48.255: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 28 22:22:48.263: INFO: Updating deployment webserver-deployment
Jan 28 22:22:48.263: INFO: Waiting for observed generation 2
Jan 28 22:22:50.328: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 28 22:22:50.701: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 28 22:22:50.746: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 28 22:22:50.978: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 28 22:22:50.978: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 28 22:22:51.076: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 28 22:22:51.084: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 28 22:22:51.084: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 28 22:22:51.095: INFO: Updating deployment webserver-deployment
Jan 28 22:22:51.095: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 28 22:22:51.549: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 28 22:22:51.604: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
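
(Those two numbers follow from proportional scaling arithmetic. Per the deployment dump below, the strategy is MaxSurge:3 / MaxUnavailable:2, so while the rollout is in flight the replicasets may hold spec.replicas + maxSurge = 30 + 3 = 33 pods in total — the value recorded in the deployment.kubernetes.io/max-replicas:33 annotation. The headroom of 33 - (8 + 5) = 20 replicas is split in proportion to each replicaset's current size; the exact rounding is an implementation detail of the deployment controller, but here it works out as:)

# old replicaset:  8 + round(20 * 8/13) = 8 + 12 = 20   -> ".spec.replicas = 20"
# new replicaset:  5 + (20 - 12)        = 5 +  8 = 13   -> ".spec.replicas = 13"
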
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 28 22:22:52.178: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1474 /apis/apps/v1/namespaces/deployment-1474/deployments/webserver-deployment d890ab6b-8bc1-42f7-94c2-e5165a5d6dd2 4975895 3 2020-01-28 22:22:17 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b116d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-28 22:22:50 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-28 22:22:51 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 28 22:22:52.192: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-1474 /apis/apps/v1/namespaces/deployment-1474/replicasets/webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 4975878 3 2020-01-28 22:22:48 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment d890ab6b-8bc1-42f7-94c2-e5165a5d6dd2 0xc005b11ba7 0xc005b11ba8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b11c18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 22:22:52.192: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 28 22:22:52.192: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-1474 /apis/apps/v1/namespaces/deployment-1474/replicasets/webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 4975875 3 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment d890ab6b-8bc1-42f7-94c2-e5165a5d6dd2 0xc005b11ae7 0xc005b11ae8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005b11b48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 28 22:22:52.595: INFO: Pod "webserver-deployment-595b5b9587-2gdp4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2gdp4 webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-2gdp4 dba9c59c-5334-4a15-a68b-061fe0084544 4975926 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc0048960b7 0xc0048960b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.596: INFO: Pod "webserver-deployment-595b5b9587-46m6b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-46m6b webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-46m6b 24a0b251-4f88-41d6-b3fe-434502388aba 4975903 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc0048961d7 0xc0048961d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.596: INFO: Pod "webserver-deployment-595b5b9587-4p7v5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4p7v5 webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-4p7v5 66213f5f-3490-4659-9d71-c99b275366b3 4975925 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896317 0xc004896318}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.597: INFO: Pod "webserver-deployment-595b5b9587-6qhvc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6qhvc webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-6qhvc 34e1f3c2-6839-44cf-b3eb-3a7afeaa8c20 4975907 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896437 0xc004896438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.597: INFO: Pod "webserver-deployment-595b5b9587-6v7df" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6v7df webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-6v7df dd127d3c-92fa-4778-ba90-52ab80f92bc0 4975887 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896557 0xc004896558}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.598: INFO: Pod "webserver-deployment-595b5b9587-clrhl" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-clrhl webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-clrhl 043bcfe7-6c76-4cfd-93ee-b6446a1467a8 4975798 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896687 0xc004896688}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 22:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.7,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7c534bacb56266590a52b76f2d778adb68f3beeba8b66669b8aeb615432606d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.598: INFO: Pod "webserver-deployment-595b5b9587-gch94" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gch94 webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-gch94 c3f3d2df-65c5-4865-8551-5e90f0a1cdbb 4975906 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896800 0xc004896801}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.599: INFO: Pod "webserver-deployment-595b5b9587-j9r9w" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-j9r9w webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-j9r9w 410140f4-de60-4d72-9bc0-ff3afce50b58 4975777 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896907 0xc004896908}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://8290d7de810826a4f872505185194b80924f0e25e2aca06441e4fdf4d4be7285,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
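A note on reading these dumps: each "is available" / "is not available" verdict is followed by the complete v1.Pod object, rendered via the generated String() method on the k8s.io/api types (hence nil pointers printing as "nil" and pointer-valued scalars as "*420", "*300", "*true"). Only a few fields feed the verdict: Status.Phase, the Ready entry in Status.Conditions, and Status.ContainerStatuses; the rest is context. A minimal sketch that reproduces this rendering, assumed to match what the framework logs rather than its exact code path:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	pod := v1.Pod{Status: v1.PodStatus{Phase: v1.PodPending}}
	// Passing a *Pod invokes the generated String() method, which
	// produces the same "&Pod{ObjectMeta:{...},Spec:PodSpec{...},...}"
	// shape seen in the dumps in this log.
	fmt.Printf("Pod %q is not available:\n%v\n", pod.Name, &pod)
}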
Jan 28 22:22:52.599: INFO: Pod "webserver-deployment-595b5b9587-jlqlt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jlqlt webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-jlqlt 973dc121-d91e-4de7-9c07-3e433959b573 4975883 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896a90 0xc004896a91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.599: INFO: Pod "webserver-deployment-595b5b9587-k8mbv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k8mbv webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-k8mbv 0689fd4d-aab6-4166-9bdf-3508987f155f 4975927 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896bd7 0xc004896bd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.600: INFO: Pod "webserver-deployment-595b5b9587-lz296" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lz296 webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-lz296 5fbd1ac4-da2f-46fd-b1e9-d7f4910ab5f9 4975806 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896cf7 0xc004896cf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 22:22:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cb1a3a69b8887b3931b14cfc3d268941e8d23321d78afef16620d48ff91bcc91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.601: INFO: Pod "webserver-deployment-595b5b9587-mtxqr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mtxqr webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-mtxqr b3240c22-5adf-4bc1-b89f-0a80a53c1729 4975780 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896e80 0xc004896e81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4783fdf63b25c80ddcbaa14a78320104ff3555ed9990eb9634e4e5b6cf8b11db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.602: INFO: Pod "webserver-deployment-595b5b9587-pmpqr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pmpqr webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-pmpqr 39d4e145-e703-4c00-bbb0-49502a8ee343 4975902 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004896fe0 0xc004896fe1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.603: INFO: Pod "webserver-deployment-595b5b9587-qhqxg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qhqxg webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-qhqxg 624beec4-9a50-4325-91f2-4b8e958bedee 4975921 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc0048970e7 0xc0048970e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.603: INFO: Pod "webserver-deployment-595b5b9587-sdvdg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sdvdg webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-sdvdg f9080073-55b5-4028-b0db-6aecb1b4037c 4975924 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc0048971f7 0xc0048971f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 22:22:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
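The verdict itself follows the usual Deployment availability rule: a pod counts as available once its Ready condition is True and has stayed True for at least minReadySeconds. The Running pods above flip to available as soon as Ready goes True (consistent with a minReadySeconds of 0), while webserver-deployment-595b5b9587-sdvdg is Ready=False with reason ContainersNotReady and is therefore reported as not available. A minimal re-implementation of that rule, assuming only the standard k8s.io/api types; this is a sketch of the semantics, not the e2e framework's actual helper:

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable mirrors the standard availability rule: the Ready
// condition must be True, and must have been True for minReadySeconds.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != v1.PodReady {
			continue
		}
		if c.Status != v1.ConditionTrue {
			return false // e.g. sdvdg above: Ready=False (ContainersNotReady)
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := time.Duration(minReadySeconds) * time.Second
		return !c.LastTransitionTime.IsZero() &&
			c.LastTransitionTime.Add(readyFor).Before(now.Time)
	}
	return false // no Ready condition yet: still Pending/unscheduled
}

func main() {
	// The Pending pods in these dumps carry no Ready=True condition,
	// so the loop falls through and they count as unavailable.
	fmt.Println(isPodAvailable(&v1.Pod{}, 0, metav1.Now())) // false
}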
Jan 28 22:22:52.603: INFO: Pod "webserver-deployment-595b5b9587-vv9tv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vv9tv webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-vv9tv b82c7b5d-6a8e-413e-b2d5-1d64cbef7d64 4975774 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004897357 0xc004897358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-28 22:22:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://01c98ec5a1fca2fd0090cd6fe9e15a89e17ff54fde5d62ef76f998bd20f0a4f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.604: INFO: Pod "webserver-deployment-595b5b9587-w5g4m" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w5g4m webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-w5g4m 7dd0e461-eabc-4216-bb81-b24da31dcba4 4975783 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc0048974c0 0xc0048974c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://8fc3605f875e230973e21fc3e3e06c7232dc9e891c3726ab541995f3e5f7616b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.604: INFO: Pod "webserver-deployment-595b5b9587-x2twt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2twt webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-x2twt c1774df0-c9e2-42c9-b659-279b2ef2012f 4975769 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004897620 0xc004897621}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 22:22:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://92ff3bf8e816845f7bc518275eb8df8027e6d9e74c980a75fef67daf51f58fe9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.604: INFO: Pod "webserver-deployment-595b5b9587-xv56n" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xv56n webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-xv56n f9407dde-84a0-48e9-b48a-a75b7ff9c2cc 4975793 0 2020-01-28 22:22:17 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc004897790 0xc004897791}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.9,StartTime:2020-01-28 22:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 22:22:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://2d9f07a1849c38db81d58b610f37247fbb57ee113f2e2fd0e5c9a7e3bb0235c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.605: INFO: Pod "webserver-deployment-595b5b9587-zfwht" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zfwht webserver-deployment-595b5b9587- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-595b5b9587-zfwht c277dbfb-eae3-4ba2-95c4-91c4bf02b9ee 4975908 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0e177e7-6158-478c-a034-a99521b5e5aa 0xc0048978f0 0xc0048978f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.605: INFO: Pod "webserver-deployment-c7997dcc8-2bts5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2bts5 webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-2bts5 e973761c-041e-47f9-aaff-d007fb5f62d5 4975933 0 2020-01-28 22:22:52 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc004897a07 0xc004897a08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
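From here on the dumps come from the second ReplicaSet, webserver-deployment-c7997dcc8: its template uses the deliberately unresolvable image webserver:404, so these pods can never pull their container and remain Pending and not available. To inspect just this half of the rollout, selecting on the pod-template-hash label visible in the dumps would look roughly like the following sketch against the v1.17-era client-go API (pre-context Get/List signatures) and the kubeconfig path used by this run:

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Select only the new ReplicaSet's pods via the pod-template-hash
	// label shown in the dumps above.
	pods, err := cs.CoreV1().Pods("deployment-1474").List(metav1.ListOptions{
		LabelSelector: "pod-template-hash=c7997dcc8",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// All of these stay Pending: webserver:404 cannot be pulled.
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}

With kubectl the equivalent filter is the same label selector, e.g. kubectl get pods -n deployment-1474 -l pod-template-hash=c7997dcc8.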
Jan 28 22:22:52.606: INFO: Pod "webserver-deployment-c7997dcc8-47vcj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-47vcj webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-47vcj 6de19528-1d26-4302-af9d-ddb00e52db15 4975920 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc004897b37 0xc004897b38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.606: INFO: Pod "webserver-deployment-c7997dcc8-9mnts" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9mnts webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-9mnts 70634ab7-6b3c-43b2-85da-9b9d68c0da2c 4975932 0 2020-01-28 22:22:52 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc004897c67 0xc004897c68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.606: INFO: Pod "webserver-deployment-c7997dcc8-9mskg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9mskg webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-9mskg 30faf0b0-cb28-41f8-bd08-ad9bc231aafe 4975904 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc004897d70 0xc004897d71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.607: INFO: Pod "webserver-deployment-c7997dcc8-dsjgc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dsjgc webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-dsjgc 0f011e37-79bd-4b28-b43a-ad83e084b4be 4975842 0 2020-01-28 22:22:48 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc004897e97 0xc004897e98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 22:22:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.607: INFO: Pod "webserver-deployment-c7997dcc8-f42b7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f42b7 webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-f42b7 7f72cf26-82f2-4ede-9c82-0ffc81a91d72 4975853 0 2020-01-28 22:22:48 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc002988027 0xc002988028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 22:22:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.608: INFO: Pod "webserver-deployment-c7997dcc8-lx25w" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lx25w webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-lx25w 0c86c78f-408f-49a3-b35e-1fcf7e4007ce 4975934 0 2020-01-28 22:22:52 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc0029881a7 0xc0029881a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.608: INFO: Pod "webserver-deployment-c7997dcc8-q5bk8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q5bk8 webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-q5bk8 0ec8f1b5-fa1a-4527-a560-6813db08b782 4975863 0 2020-01-28 22:22:48 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc0029882c7 0xc0029882c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 22:22:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.609: INFO: Pod "webserver-deployment-c7997dcc8-rghhl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rghhl webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-rghhl c5c4f242-407c-411a-8c4c-5d8b6dea8f58 4975930 0 2020-01-28 22:22:52 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc002988447 0xc002988448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.609: INFO: Pod "webserver-deployment-c7997dcc8-t68w8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t68w8 webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-t68w8 111c50e7-59e5-4643-90f6-4d00080a36f1 4975865 0 2020-01-28 22:22:48 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc002988577 0xc002988578}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 22:22:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.610: INFO: Pod "webserver-deployment-c7997dcc8-wp2cb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wp2cb webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-wp2cb b82cf32f-f91d-45e3-8881-5eee02a4c50d 4975845 0 2020-01-28 22:22:48 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc0029886e7 0xc0029886e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 22:22:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.610: INFO: Pod "webserver-deployment-c7997dcc8-xwc7x" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xwc7x webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-xwc7x fb63116f-54a7-4d35-8fc8-0fbe4b972ee7 4975928 0 2020-01-28 22:22:51 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc002988857 0xc002988858}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 22:22:52.610: INFO: Pod "webserver-deployment-c7997dcc8-xz6rx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xz6rx webserver-deployment-c7997dcc8- deployment-1474 /api/v1/namespaces/deployment-1474/pods/webserver-deployment-c7997dcc8-xz6rx 0799e2ad-a1b1-474a-9cdb-e29ef18f0f93 4975931 0 2020-01-28 22:22:52 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 96ce3ee9-2b28-4bce-a52e-5572702c0e5c 0xc002988977 0xc002988978}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c5wfq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c5wfq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c5wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 22:22:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:22:52.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1474" for this suite.

• [SLOW TEST:36.956 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":169,"skipped":2643,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:22:54.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:23:00.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010" in namespace "downward-api-435" to be "success or failure"
Jan 28 22:23:01.146: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 863.511222ms
Jan 28 22:23:03.349: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 3.066638449s
Jan 28 22:23:07.261: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 6.978297218s
Jan 28 22:23:09.513: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 9.23082815s
Jan 28 22:23:12.910: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 12.627935374s
Jan 28 22:23:15.685: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 15.402076774s
Jan 28 22:23:17.717: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 17.434400201s
Jan 28 22:23:20.066: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 19.783957261s
Jan 28 22:23:22.570: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 22.287265897s
Jan 28 22:23:24.656: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 24.373924166s
Jan 28 22:23:26.989: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 26.706551743s
Jan 28 22:23:29.671: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 29.388741864s
Jan 28 22:23:32.149: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 31.86604471s
Jan 28 22:23:34.155: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 33.872810079s
Jan 28 22:23:36.165: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 35.882539189s
Jan 28 22:23:38.172: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 37.889166947s
Jan 28 22:23:40.178: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 39.895453359s
Jan 28 22:23:42.183: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 41.900473127s
Jan 28 22:23:44.191: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 43.908201736s
Jan 28 22:23:46.199: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Pending", Reason="", readiness=false. Elapsed: 45.916575426s
Jan 28 22:23:48.207: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 47.924321476s
STEP: Saw pod success
Jan 28 22:23:48.207: INFO: Pod "downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010" satisfied condition "success or failure"
Jan 28 22:23:48.212: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010 container client-container: 
STEP: delete the pod
Jan 28 22:23:48.301: INFO: Waiting for pod downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010 to disappear
Jan 28 22:23:48.313: INFO: Pod downwardapi-volume-9ad5c57b-2a4b-4bc2-9226-02de3a045010 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:23:48.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-435" for this suite.

• [SLOW TEST:53.743 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2659,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:23:48.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-v5d7
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 22:23:48.468: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-v5d7" in namespace "subpath-5347" to be "success or failure"
Jan 28 22:23:48.537: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 68.815355ms
Jan 28 22:23:50.546: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077609775s
Jan 28 22:23:52.553: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084600809s
Jan 28 22:23:54.564: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095808345s
Jan 28 22:23:56.583: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 8.114870477s
Jan 28 22:23:58.591: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 10.122286319s
Jan 28 22:24:00.597: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 12.128401112s
Jan 28 22:24:02.608: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 14.139250027s
Jan 28 22:24:04.613: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 16.144215144s
Jan 28 22:24:06.617: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 18.148819039s
Jan 28 22:24:08.626: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 20.157281948s
Jan 28 22:24:10.639: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 22.170077434s
Jan 28 22:24:12.645: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 24.176460249s
Jan 28 22:24:14.651: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Running", Reason="", readiness=true. Elapsed: 26.183036638s
Jan 28 22:24:16.660: INFO: Pod "pod-subpath-test-configmap-v5d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.191857814s
STEP: Saw pod success
Jan 28 22:24:16.660: INFO: Pod "pod-subpath-test-configmap-v5d7" satisfied condition "success or failure"
Jan 28 22:24:16.663: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-v5d7 container test-container-subpath-configmap-v5d7: 
STEP: delete the pod
Jan 28 22:24:16.748: INFO: Waiting for pod pod-subpath-test-configmap-v5d7 to disappear
Jan 28 22:24:16.759: INFO: Pod pod-subpath-test-configmap-v5d7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-v5d7
Jan 28 22:24:16.759: INFO: Deleting pod "pod-subpath-test-configmap-v5d7" in namespace "subpath-5347"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:24:16.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5347" for this suite.

• [SLOW TEST:28.439 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":171,"skipped":2670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:24:16.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jan 28 22:24:16.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8856'
Jan 28 22:24:19.697: INFO: stderr: ""
Jan 28 22:24:19.697: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 28 22:24:20.706: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:20.706: INFO: Found 0 / 1
Jan 28 22:24:21.752: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:21.753: INFO: Found 0 / 1
Jan 28 22:24:22.709: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:22.709: INFO: Found 0 / 1
Jan 28 22:24:23.710: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:23.711: INFO: Found 0 / 1
Jan 28 22:24:24.706: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:24.706: INFO: Found 0 / 1
Jan 28 22:24:25.708: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:25.709: INFO: Found 0 / 1
Jan 28 22:24:26.710: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:26.711: INFO: Found 1 / 1
Jan 28 22:24:26.711: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 28 22:24:26.717: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:26.717: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan 28 22:24:26.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-vk49f --namespace=kubectl-8856 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 28 22:24:26.904: INFO: stderr: ""
Jan 28 22:24:26.904: INFO: stdout: "pod/agnhost-master-vk49f patched\n"
STEP: checking annotations
Jan 28 22:24:26.918: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 22:24:26.918: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:24:26.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8856" for this suite.

• [SLOW TEST:10.156 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":172,"skipped":2728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:24:26.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-3961/secret-test-22d218ce-24d0-412f-9734-eee7fb9033f4
STEP: Creating a pod to test consume secrets
Jan 28 22:24:27.095: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01" in namespace "secrets-3961" to be "success or failure"
Jan 28 22:24:27.099: INFO: Pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184589ms
Jan 28 22:24:29.106: INFO: Pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010467223s
Jan 28 22:24:31.112: INFO: Pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016956468s
Jan 28 22:24:33.119: INFO: Pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02317676s
Jan 28 22:24:35.131: INFO: Pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035978225s
STEP: Saw pod success
Jan 28 22:24:35.132: INFO: Pod "pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01" satisfied condition "success or failure"
Jan 28 22:24:35.136: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01 container env-test: 
STEP: delete the pod
Jan 28 22:24:35.247: INFO: Waiting for pod pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01 to disappear
Jan 28 22:24:35.258: INFO: Pod pod-configmaps-1b184b4f-1779-40cb-a412-0fff7199fa01 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:24:35.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3961" for this suite.

• [SLOW TEST:8.344 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2771,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:24:35.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:24:51.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9641" for this suite.

• [SLOW TEST:16.647 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":174,"skipped":2790,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:24:51.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:24:52.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Jan 28 22:24:55.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 create -f -'
Jan 28 22:24:57.578: INFO: stderr: ""
Jan 28 22:24:57.579: INFO: stdout: "e2e-test-crd-publish-openapi-3155-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 28 22:24:57.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 delete e2e-test-crd-publish-openapi-3155-crds test-cr'
Jan 28 22:24:57.747: INFO: stderr: ""
Jan 28 22:24:57.747: INFO: stdout: "e2e-test-crd-publish-openapi-3155-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 28 22:24:57.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 apply -f -'
Jan 28 22:24:58.116: INFO: stderr: ""
Jan 28 22:24:58.116: INFO: stdout: "e2e-test-crd-publish-openapi-3155-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 28 22:24:58.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-16 delete e2e-test-crd-publish-openapi-3155-crds test-cr'
Jan 28 22:24:58.231: INFO: stderr: ""
Jan 28 22:24:58.231: INFO: stdout: "e2e-test-crd-publish-openapi-3155-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 28 22:24:58.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3155-crds'
Jan 28 22:24:58.612: INFO: stderr: ""
Jan 28 22:24:58.612: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3155-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:25:02.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-16" for this suite.

• [SLOW TEST:10.263 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":175,"skipped":2807,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:25:02.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 28 22:25:02.272: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:25:14.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6699" for this suite.

• [SLOW TEST:12.304 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":176,"skipped":2837,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:25:14.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 28 22:25:14.587: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 22:25:14.599: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 22:25:14.602: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 22:25:14.613: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.613: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:25:14.613: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 22:25:14.613: INFO: 	Container weave ready: true, restart count 1
Jan 28 22:25:14.614: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:25:14.614: INFO: pod-init-42ebde02-32de-49b1-b7f9-3472fd6dd12a from init-container-6699 started at 2020-01-28 22:25:02 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.614: INFO: 	Container run1 ready: false, restart count 0
Jan 28 22:25:14.614: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 22:25:14.637: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 22:25:14.637: INFO: 	Container weave ready: true, restart count 0
Jan 28 22:25:14.637: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:25:14.637: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 22:25:14.637: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:25:14.637: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 22:25:14.637: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 22:25:14.637: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container etcd ready: true, restart count 1
Jan 28 22:25:14.637: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:25:14.637: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 22:25:14.637: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4fe87420-6dfa-48fc-b3ae-b5197114e8ac 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-4fe87420-6dfa-48fc-b3ae-b5197114e8ac off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4fe87420-6dfa-48fc-b3ae-b5197114e8ac
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:25:47.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3367" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:32.568 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":177,"skipped":2855,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:25:47.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:25:47.131: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283" in namespace "security-context-test-5406" to be "success or failure"
Jan 28 22:25:47.170: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Pending", Reason="", readiness=false. Elapsed: 39.011029ms
Jan 28 22:25:49.176: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044435595s
Jan 28 22:25:51.182: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050492739s
Jan 28 22:25:53.192: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061171695s
Jan 28 22:25:55.202: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070956329s
Jan 28 22:25:57.213: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081872211s
Jan 28 22:25:59.219: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087911544s
Jan 28 22:25:59.219: INFO: Pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283" satisfied condition "success or failure"
Jan 28 22:25:59.228: INFO: Got logs for pod "busybox-privileged-false-2a26a090-646e-43e5-a81e-31de3abb5283": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:25:59.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5406" for this suite.

• [SLOW TEST:12.177 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2875,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:25:59.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4391 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4391;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4391 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4391;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4391.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4391.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4391.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4391.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4391.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4391.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4391.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 177.202.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.202.177_udp@PTR;check="$$(dig +tcp +noall +answer +search 177.202.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.202.177_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4391 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4391;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4391 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4391;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4391.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4391.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4391.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4391.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4391.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4391.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4391.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4391.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4391.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 177.202.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.202.177_udp@PTR;check="$$(dig +tcp +noall +answer +search 177.202.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.202.177_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 22:26:13.513: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.518: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.526: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.530: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.533: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.536: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.540: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.567: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.570: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.573: INFO: Unable to read jessie_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.576: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.580: INFO: Unable to read jessie_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.582: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.585: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.588: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:13.615: INFO: Lookups using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4391 wheezy_tcp@dns-test-service.dns-4391 wheezy_udp@dns-test-service.dns-4391.svc wheezy_tcp@dns-test-service.dns-4391.svc wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4391 jessie_tcp@dns-test-service.dns-4391 jessie_udp@dns-test-service.dns-4391.svc jessie_tcp@dns-test-service.dns-4391.svc jessie_udp@_http._tcp.dns-test-service.dns-4391.svc jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc]

Jan 28 22:26:18.629: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.638: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.644: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.648: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.653: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.666: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.674: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.725: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.732: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.737: INFO: Unable to read jessie_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.745: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.751: INFO: Unable to read jessie_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.758: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.769: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.773: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:18.800: INFO: Lookups using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4391 wheezy_tcp@dns-test-service.dns-4391 wheezy_udp@dns-test-service.dns-4391.svc wheezy_tcp@dns-test-service.dns-4391.svc wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4391 jessie_tcp@dns-test-service.dns-4391 jessie_udp@dns-test-service.dns-4391.svc jessie_tcp@dns-test-service.dns-4391.svc jessie_udp@_http._tcp.dns-test-service.dns-4391.svc jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc]

Jan 28 22:26:23.625: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.630: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.635: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.651: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.690: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.694: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.698: INFO: Unable to read jessie_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.701: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.704: INFO: Unable to read jessie_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.707: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.710: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.713: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:23.738: INFO: Lookups using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4391 wheezy_tcp@dns-test-service.dns-4391 wheezy_udp@dns-test-service.dns-4391.svc wheezy_tcp@dns-test-service.dns-4391.svc wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4391 jessie_tcp@dns-test-service.dns-4391 jessie_udp@dns-test-service.dns-4391.svc jessie_tcp@dns-test-service.dns-4391.svc jessie_udp@_http._tcp.dns-test-service.dns-4391.svc jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc]

Jan 28 22:26:28.665: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.682: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.705: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.732: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.743: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.751: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.821: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.825: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.830: INFO: Unable to read jessie_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.837: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.843: INFO: Unable to read jessie_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.860: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.869: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:28.920: INFO: Lookups using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4391 wheezy_tcp@dns-test-service.dns-4391 wheezy_udp@dns-test-service.dns-4391.svc wheezy_tcp@dns-test-service.dns-4391.svc wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4391 jessie_tcp@dns-test-service.dns-4391 jessie_udp@dns-test-service.dns-4391.svc jessie_tcp@dns-test-service.dns-4391.svc jessie_udp@_http._tcp.dns-test-service.dns-4391.svc jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc]

Jan 28 22:26:33.626: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.633: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.639: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.644: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.648: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.654: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.660: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.664: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.697: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.702: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.707: INFO: Unable to read jessie_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.716: INFO: Unable to read jessie_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.720: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.724: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.728: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:33.749: INFO: Lookups using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4391 wheezy_tcp@dns-test-service.dns-4391 wheezy_udp@dns-test-service.dns-4391.svc wheezy_tcp@dns-test-service.dns-4391.svc wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4391 jessie_tcp@dns-test-service.dns-4391 jessie_udp@dns-test-service.dns-4391.svc jessie_tcp@dns-test-service.dns-4391.svc jessie_udp@_http._tcp.dns-test-service.dns-4391.svc jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc]

Jan 28 22:26:38.622: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.628: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.633: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.637: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.642: INFO: Unable to read wheezy_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.646: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.650: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.655: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.686: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.693: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.700: INFO: Unable to read jessie_udp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.705: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391 from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.713: INFO: Unable to read jessie_udp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.719: INFO: Unable to read jessie_tcp@dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.724: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.729: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc from pod dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7: the server could not find the requested resource (get pods dns-test-ec444801-e319-4e82-87a6-b9d9707825c7)
Jan 28 22:26:38.763: INFO: Lookups using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4391 wheezy_tcp@dns-test-service.dns-4391 wheezy_udp@dns-test-service.dns-4391.svc wheezy_tcp@dns-test-service.dns-4391.svc wheezy_udp@_http._tcp.dns-test-service.dns-4391.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4391.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4391 jessie_tcp@dns-test-service.dns-4391 jessie_udp@dns-test-service.dns-4391.svc jessie_tcp@dns-test-service.dns-4391.svc jessie_udp@_http._tcp.dns-test-service.dns-4391.svc jessie_tcp@_http._tcp.dns-test-service.dns-4391.svc]

Jan 28 22:26:43.752: INFO: DNS probes using dns-4391/dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 succeeded
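The probe set above exercises both plain service-name lookups and SRV lookups (the _http._tcp entries) from two resolver images, "wheezy" and "jessie"; the run only succeeds once every name resolves from both. A sketch of reproducing two of the lookups by hand, using the pod and namespace names from this run and assuming the probe image ships nslookup:

  # A-record lookup for the cluster-local service name
  kubectl --kubeconfig=/root/.kube/config exec -n dns-4391 dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 -- nslookup dns-test-service.dns-4391.svc
  # SRV lookup for the named http port, matching the _http._tcp names above
  kubectl --kubeconfig=/root/.kube/config exec -n dns-4391 dns-test-ec444801-e319-4e82-87a6-b9d9707825c7 -- nslookup -type=srv _http._tcp.dns-test-service.dns-4391.svc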

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:26:44.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4391" for this suite.

• [SLOW TEST:45.066 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":179,"skipped":2881,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:26:44.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1724
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-1724
STEP: Waiting until all replicas of stateful set ss are running in namespace statefulset-1724
Jan 28 22:26:44.548: INFO: Found 0 stateful pods, waiting for 1
Jan 28 22:26:54.565: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 22:27:04.558: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 28 22:27:04.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 22:27:04.973: INFO: stderr: "I0128 22:27:04.753682    3833 log.go:172] (0xc000104b00) (0xc0006e7ae0) Create stream\nI0128 22:27:04.753855    3833 log.go:172] (0xc000104b00) (0xc0006e7ae0) Stream added, broadcasting: 1\nI0128 22:27:04.758806    3833 log.go:172] (0xc000104b00) Reply frame received for 1\nI0128 22:27:04.758868    3833 log.go:172] (0xc000104b00) (0xc0006e7cc0) Create stream\nI0128 22:27:04.758881    3833 log.go:172] (0xc000104b00) (0xc0006e7cc0) Stream added, broadcasting: 3\nI0128 22:27:04.760516    3833 log.go:172] (0xc000104b00) Reply frame received for 3\nI0128 22:27:04.760539    3833 log.go:172] (0xc000104b00) (0xc00094a000) Create stream\nI0128 22:27:04.760554    3833 log.go:172] (0xc000104b00) (0xc00094a000) Stream added, broadcasting: 5\nI0128 22:27:04.761985    3833 log.go:172] (0xc000104b00) Reply frame received for 5\nI0128 22:27:04.835116    3833 log.go:172] (0xc000104b00) Data frame received for 5\nI0128 22:27:04.835277    3833 log.go:172] (0xc00094a000) (5) Data frame handling\nI0128 22:27:04.835306    3833 log.go:172] (0xc00094a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 22:27:04.863820    3833 log.go:172] (0xc000104b00) Data frame received for 3\nI0128 22:27:04.863902    3833 log.go:172] (0xc0006e7cc0) (3) Data frame handling\nI0128 22:27:04.863936    3833 log.go:172] (0xc0006e7cc0) (3) Data frame sent\nI0128 22:27:04.960930    3833 log.go:172] (0xc000104b00) Data frame received for 1\nI0128 22:27:04.961020    3833 log.go:172] (0xc0006e7ae0) (1) Data frame handling\nI0128 22:27:04.961052    3833 log.go:172] (0xc0006e7ae0) (1) Data frame sent\nI0128 22:27:04.961234    3833 log.go:172] (0xc000104b00) (0xc0006e7ae0) Stream removed, broadcasting: 1\nI0128 22:27:04.962661    3833 log.go:172] (0xc000104b00) (0xc0006e7cc0) Stream removed, broadcasting: 3\nI0128 22:27:04.962819    3833 log.go:172] (0xc000104b00) (0xc00094a000) Stream removed, broadcasting: 5\nI0128 22:27:04.962865    3833 log.go:172] (0xc000104b00) Go away received\nI0128 22:27:04.962942    3833 log.go:172] (0xc000104b00) (0xc0006e7ae0) Stream removed, broadcasting: 1\nI0128 22:27:04.962993    3833 log.go:172] (0xc000104b00) (0xc0006e7cc0) Stream removed, broadcasting: 3\nI0128 22:27:04.963014    3833 log.go:172] (0xc000104b00) (0xc00094a000) Stream removed, broadcasting: 5\n"
Jan 28 22:27:04.973: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 22:27:04.973: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
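The mv above is the test's lever for toggling readiness: the pods serve /usr/local/apache2/htdocs/index.html, and hiding that file makes the pod's readiness probe fail without killing the container (that the probe is an HTTP GET against the served page is an assumption, but the Ready=false flip below is consistent with it). A sketch of checking the effect by hand:

  # hide the probed page on ss-0 (the same command the framework just ran)
  kubectl --kubeconfig=/root/.kube/config exec -n statefulset-1724 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  # the Ready condition should drop to False once the probe starts failing
  kubectl --kubeconfig=/root/.kube/config get pod ss-0 -n statefulset-1724 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'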

Jan 28 22:27:04.978: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 28 22:27:14.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 22:27:14.985: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 22:27:15.011: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 28 22:27:15.011: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:05 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:15.011: INFO: 
Jan 28 22:27:15.011: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 28 22:27:16.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990905078s
Jan 28 22:27:17.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.852522539s
Jan 28 22:27:18.261: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.752246472s
Jan 28 22:27:19.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.739998853s
Jan 28 22:27:20.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.732567728s
Jan 28 22:27:22.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.129056905s
Jan 28 22:27:23.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.353988998s
Jan 28 22:27:24.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 346.443066ms
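The set reaches three replicas even though ss-0 is unready, which is exactly the burst behavior under test: with ordered pod management, scale-up would wait for ss-0 to become Ready first. Burst scaling corresponds to podManagementPolicy: Parallel on the StatefulSet spec (an assumption about how this test configures ss); a sketch of verifying it:

  # OrderedReady (the default) gates scale-up on pod readiness; Parallel does not
  kubectl --kubeconfig=/root/.kube/config get statefulset ss -n statefulset-1724 -o jsonpath='{.spec.podManagementPolicy}'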
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-1724
Jan 28 22:27:25.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:27:26.119: INFO: stderr: "I0128 22:27:25.941369    3854 log.go:172] (0xc000020dc0) (0xc000713e00) Create stream\nI0128 22:27:25.941599    3854 log.go:172] (0xc000020dc0) (0xc000713e00) Stream added, broadcasting: 1\nI0128 22:27:25.945106    3854 log.go:172] (0xc000020dc0) Reply frame received for 1\nI0128 22:27:25.945310    3854 log.go:172] (0xc000020dc0) (0xc0006c86e0) Create stream\nI0128 22:27:25.945331    3854 log.go:172] (0xc000020dc0) (0xc0006c86e0) Stream added, broadcasting: 3\nI0128 22:27:25.947431    3854 log.go:172] (0xc000020dc0) Reply frame received for 3\nI0128 22:27:25.947465    3854 log.go:172] (0xc000020dc0) (0xc0002a74a0) Create stream\nI0128 22:27:25.947480    3854 log.go:172] (0xc000020dc0) (0xc0002a74a0) Stream added, broadcasting: 5\nI0128 22:27:25.949403    3854 log.go:172] (0xc000020dc0) Reply frame received for 5\nI0128 22:27:26.031813    3854 log.go:172] (0xc000020dc0) Data frame received for 3\nI0128 22:27:26.031887    3854 log.go:172] (0xc0006c86e0) (3) Data frame handling\nI0128 22:27:26.031907    3854 log.go:172] (0xc0006c86e0) (3) Data frame sent\nI0128 22:27:26.031961    3854 log.go:172] (0xc000020dc0) Data frame received for 5\nI0128 22:27:26.031972    3854 log.go:172] (0xc0002a74a0) (5) Data frame handling\nI0128 22:27:26.031983    3854 log.go:172] (0xc0002a74a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 22:27:26.105437    3854 log.go:172] (0xc000020dc0) Data frame received for 1\nI0128 22:27:26.105605    3854 log.go:172] (0xc000020dc0) (0xc0002a74a0) Stream removed, broadcasting: 5\nI0128 22:27:26.105688    3854 log.go:172] (0xc000713e00) (1) Data frame handling\nI0128 22:27:26.105724    3854 log.go:172] (0xc000713e00) (1) Data frame sent\nI0128 22:27:26.105784    3854 log.go:172] (0xc000020dc0) (0xc0006c86e0) Stream removed, broadcasting: 3\nI0128 22:27:26.105835    3854 log.go:172] (0xc000020dc0) (0xc000713e00) Stream removed, broadcasting: 1\nI0128 22:27:26.105849    3854 log.go:172] (0xc000020dc0) Go away received\nI0128 22:27:26.106852    3854 log.go:172] (0xc000020dc0) (0xc000713e00) Stream removed, broadcasting: 1\nI0128 22:27:26.106876    3854 log.go:172] (0xc000020dc0) (0xc0006c86e0) Stream removed, broadcasting: 3\nI0128 22:27:26.106901    3854 log.go:172] (0xc000020dc0) (0xc0002a74a0) Stream removed, broadcasting: 5\n"
Jan 28 22:27:26.120: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 22:27:26.120: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 28 22:27:26.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:27:26.589: INFO: stderr: "I0128 22:27:26.356793    3876 log.go:172] (0xc000742790) (0xc00073a1e0) Create stream\nI0128 22:27:26.357024    3876 log.go:172] (0xc000742790) (0xc00073a1e0) Stream added, broadcasting: 1\nI0128 22:27:26.362859    3876 log.go:172] (0xc000742790) Reply frame received for 1\nI0128 22:27:26.363104    3876 log.go:172] (0xc000742790) (0xc0006839a0) Create stream\nI0128 22:27:26.363152    3876 log.go:172] (0xc000742790) (0xc0006839a0) Stream added, broadcasting: 3\nI0128 22:27:26.364909    3876 log.go:172] (0xc000742790) Reply frame received for 3\nI0128 22:27:26.364984    3876 log.go:172] (0xc000742790) (0xc000523360) Create stream\nI0128 22:27:26.364995    3876 log.go:172] (0xc000742790) (0xc000523360) Stream added, broadcasting: 5\nI0128 22:27:26.365792    3876 log.go:172] (0xc000742790) Reply frame received for 5\nI0128 22:27:26.423814    3876 log.go:172] (0xc000742790) Data frame received for 5\nI0128 22:27:26.423904    3876 log.go:172] (0xc000523360) (5) Data frame handling\nI0128 22:27:26.423923    3876 log.go:172] (0xc000523360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 22:27:26.425901    3876 log.go:172] (0xc000742790) Data frame received for 3\nI0128 22:27:26.425920    3876 log.go:172] (0xc0006839a0) (3) Data frame handling\nI0128 22:27:26.425948    3876 log.go:172] (0xc0006839a0) (3) Data frame sent\nI0128 22:27:26.426326    3876 log.go:172] (0xc000742790) Data frame received for 5\nI0128 22:27:26.426340    3876 log.go:172] (0xc000523360) (5) Data frame handling\nI0128 22:27:26.426353    3876 log.go:172] (0xc000523360) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0128 22:27:26.561815    3876 log.go:172] (0xc000742790) (0xc000523360) Stream removed, broadcasting: 5\nI0128 22:27:26.562104    3876 log.go:172] (0xc000742790) Data frame received for 1\nI0128 22:27:26.562160    3876 log.go:172] (0xc000742790) (0xc0006839a0) Stream removed, broadcasting: 3\nI0128 22:27:26.562246    3876 log.go:172] (0xc00073a1e0) (1) Data frame handling\nI0128 22:27:26.562293    3876 log.go:172] (0xc00073a1e0) (1) Data frame sent\nI0128 22:27:26.562309    3876 log.go:172] (0xc000742790) (0xc00073a1e0) Stream removed, broadcasting: 1\nI0128 22:27:26.562337    3876 log.go:172] (0xc000742790) Go away received\nI0128 22:27:26.564253    3876 log.go:172] (0xc000742790) (0xc00073a1e0) Stream removed, broadcasting: 1\nI0128 22:27:26.564300    3876 log.go:172] (0xc000742790) (0xc0006839a0) Stream removed, broadcasting: 3\nI0128 22:27:26.564320    3876 log.go:172] (0xc000742790) (0xc000523360) Stream removed, broadcasting: 5\n"
Jan 28 22:27:26.590: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 22:27:26.590: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 28 22:27:26.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:27:27.022: INFO: stderr: "I0128 22:27:26.751342    3897 log.go:172] (0xc000566dc0) (0xc0006c5ea0) Create stream\nI0128 22:27:26.751508    3897 log.go:172] (0xc000566dc0) (0xc0006c5ea0) Stream added, broadcasting: 1\nI0128 22:27:26.755258    3897 log.go:172] (0xc000566dc0) Reply frame received for 1\nI0128 22:27:26.755331    3897 log.go:172] (0xc000566dc0) (0xc0005c4780) Create stream\nI0128 22:27:26.755341    3897 log.go:172] (0xc000566dc0) (0xc0005c4780) Stream added, broadcasting: 3\nI0128 22:27:26.757081    3897 log.go:172] (0xc000566dc0) Reply frame received for 3\nI0128 22:27:26.757109    3897 log.go:172] (0xc000566dc0) (0xc00072b540) Create stream\nI0128 22:27:26.757123    3897 log.go:172] (0xc000566dc0) (0xc00072b540) Stream added, broadcasting: 5\nI0128 22:27:26.758811    3897 log.go:172] (0xc000566dc0) Reply frame received for 5\nI0128 22:27:26.882473    3897 log.go:172] (0xc000566dc0) Data frame received for 5\nI0128 22:27:26.882724    3897 log.go:172] (0xc00072b540) (5) Data frame handling\nI0128 22:27:26.882750    3897 log.go:172] (0xc00072b540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0128 22:27:26.882780    3897 log.go:172] (0xc000566dc0) Data frame received for 3\nI0128 22:27:26.882787    3897 log.go:172] (0xc0005c4780) (3) Data frame handling\nI0128 22:27:26.882794    3897 log.go:172] (0xc0005c4780) (3) Data frame sent\nI0128 22:27:27.006654    3897 log.go:172] (0xc000566dc0) (0xc0005c4780) Stream removed, broadcasting: 3\nI0128 22:27:27.006798    3897 log.go:172] (0xc000566dc0) Data frame received for 1\nI0128 22:27:27.006834    3897 log.go:172] (0xc0006c5ea0) (1) Data frame handling\nI0128 22:27:27.006858    3897 log.go:172] (0xc0006c5ea0) (1) Data frame sent\nI0128 22:27:27.006871    3897 log.go:172] (0xc000566dc0) (0xc0006c5ea0) Stream removed, broadcasting: 1\nI0128 22:27:27.006890    3897 log.go:172] (0xc000566dc0) (0xc00072b540) Stream removed, broadcasting: 5\nI0128 22:27:27.007046    3897 log.go:172] (0xc000566dc0) Go away received\nI0128 22:27:27.008221    3897 log.go:172] (0xc000566dc0) (0xc0006c5ea0) Stream removed, broadcasting: 1\nI0128 22:27:27.008238    3897 log.go:172] (0xc000566dc0) (0xc0005c4780) Stream removed, broadcasting: 3\nI0128 22:27:27.008243    3897 log.go:172] (0xc000566dc0) (0xc00072b540) Stream removed, broadcasting: 5\n"
Jan 28 22:27:27.022: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 22:27:27.022: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
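Note the trailing || true in these restore commands: the stderr streams for ss-1 and ss-2 show mv failing ("can't rename '/tmp/index.html': No such file or directory", as expected for pods created fresh during the scale-up), and the guard keeps the exec's exit status at 0 so the step still passes. A one-line sketch of the idiom:

  # -x echoes each command to stderr (the "+ mv ..." lines above); || true makes the move best-effort
  /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'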

Jan 28 22:27:27.074: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:27:27.074: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 22:27:27.074: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Jan 28 22:27:27.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 22:27:27.468: INFO: stderr: "I0128 22:27:27.323683    3916 log.go:172] (0xc000937b80) (0xc0009c2780) Create stream\nI0128 22:27:27.323865    3916 log.go:172] (0xc000937b80) (0xc0009c2780) Stream added, broadcasting: 1\nI0128 22:27:27.336426    3916 log.go:172] (0xc000937b80) Reply frame received for 1\nI0128 22:27:27.336506    3916 log.go:172] (0xc000937b80) (0xc000624500) Create stream\nI0128 22:27:27.336526    3916 log.go:172] (0xc000937b80) (0xc000624500) Stream added, broadcasting: 3\nI0128 22:27:27.337972    3916 log.go:172] (0xc000937b80) Reply frame received for 3\nI0128 22:27:27.338078    3916 log.go:172] (0xc000937b80) (0xc00025b900) Create stream\nI0128 22:27:27.338098    3916 log.go:172] (0xc000937b80) (0xc00025b900) Stream added, broadcasting: 5\nI0128 22:27:27.339051    3916 log.go:172] (0xc000937b80) Reply frame received for 5\nI0128 22:27:27.407225    3916 log.go:172] (0xc000937b80) Data frame received for 5\nI0128 22:27:27.407303    3916 log.go:172] (0xc00025b900) (5) Data frame handling\nI0128 22:27:27.407316    3916 log.go:172] (0xc00025b900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 22:27:27.407343    3916 log.go:172] (0xc000937b80) Data frame received for 3\nI0128 22:27:27.407350    3916 log.go:172] (0xc000624500) (3) Data frame handling\nI0128 22:27:27.407361    3916 log.go:172] (0xc000624500) (3) Data frame sent\nI0128 22:27:27.458400    3916 log.go:172] (0xc000937b80) (0xc00025b900) Stream removed, broadcasting: 5\nI0128 22:27:27.458495    3916 log.go:172] (0xc000937b80) Data frame received for 1\nI0128 22:27:27.458524    3916 log.go:172] (0xc000937b80) (0xc000624500) Stream removed, broadcasting: 3\nI0128 22:27:27.458629    3916 log.go:172] (0xc0009c2780) (1) Data frame handling\nI0128 22:27:27.458662    3916 log.go:172] (0xc0009c2780) (1) Data frame sent\nI0128 22:27:27.458677    3916 log.go:172] (0xc000937b80) (0xc0009c2780) Stream removed, broadcasting: 1\nI0128 22:27:27.458693    3916 log.go:172] (0xc000937b80) Go away received\nI0128 22:27:27.459685    3916 log.go:172] (0xc000937b80) (0xc0009c2780) Stream removed, broadcasting: 1\nI0128 22:27:27.459701    3916 log.go:172] (0xc000937b80) (0xc000624500) Stream removed, broadcasting: 3\nI0128 22:27:27.459709    3916 log.go:172] (0xc000937b80) (0xc00025b900) Stream removed, broadcasting: 5\n"
Jan 28 22:27:27.468: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 22:27:27.468: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 28 22:27:27.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 22:27:27.772: INFO: stderr: "I0128 22:27:27.619911    3937 log.go:172] (0xc00094e0b0) (0xc00044f4a0) Create stream\nI0128 22:27:27.619998    3937 log.go:172] (0xc00094e0b0) (0xc00044f4a0) Stream added, broadcasting: 1\nI0128 22:27:27.627122    3937 log.go:172] (0xc00094e0b0) Reply frame received for 1\nI0128 22:27:27.627145    3937 log.go:172] (0xc00094e0b0) (0xc000924000) Create stream\nI0128 22:27:27.627153    3937 log.go:172] (0xc00094e0b0) (0xc000924000) Stream added, broadcasting: 3\nI0128 22:27:27.629183    3937 log.go:172] (0xc00094e0b0) Reply frame received for 3\nI0128 22:27:27.629200    3937 log.go:172] (0xc00094e0b0) (0xc0009cc000) Create stream\nI0128 22:27:27.629208    3937 log.go:172] (0xc00094e0b0) (0xc0009cc000) Stream added, broadcasting: 5\nI0128 22:27:27.630185    3937 log.go:172] (0xc00094e0b0) Reply frame received for 5\nI0128 22:27:27.682643    3937 log.go:172] (0xc00094e0b0) Data frame received for 5\nI0128 22:27:27.682669    3937 log.go:172] (0xc0009cc000) (5) Data frame handling\nI0128 22:27:27.682691    3937 log.go:172] (0xc0009cc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 22:27:27.714281    3937 log.go:172] (0xc00094e0b0) Data frame received for 3\nI0128 22:27:27.714350    3937 log.go:172] (0xc000924000) (3) Data frame handling\nI0128 22:27:27.714366    3937 log.go:172] (0xc000924000) (3) Data frame sent\nI0128 22:27:27.763346    3937 log.go:172] (0xc00094e0b0) Data frame received for 1\nI0128 22:27:27.763411    3937 log.go:172] (0xc00044f4a0) (1) Data frame handling\nI0128 22:27:27.763427    3937 log.go:172] (0xc00044f4a0) (1) Data frame sent\nI0128 22:27:27.763632    3937 log.go:172] (0xc00094e0b0) (0xc00044f4a0) Stream removed, broadcasting: 1\nI0128 22:27:27.764477    3937 log.go:172] (0xc00094e0b0) (0xc000924000) Stream removed, broadcasting: 3\nI0128 22:27:27.765199    3937 log.go:172] (0xc00094e0b0) (0xc0009cc000) Stream removed, broadcasting: 5\nI0128 22:27:27.765225    3937 log.go:172] (0xc00094e0b0) (0xc00044f4a0) Stream removed, broadcasting: 1\nI0128 22:27:27.765230    3937 log.go:172] (0xc00094e0b0) (0xc000924000) Stream removed, broadcasting: 3\nI0128 22:27:27.765234    3937 log.go:172] (0xc00094e0b0) (0xc0009cc000) Stream removed, broadcasting: 5\n"
Jan 28 22:27:27.772: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 22:27:27.772: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 28 22:27:27.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 22:27:28.141: INFO: stderr: "I0128 22:27:27.929751    3959 log.go:172] (0xc000a10000) (0xc00076a000) Create stream\nI0128 22:27:27.930026    3959 log.go:172] (0xc000a10000) (0xc00076a000) Stream added, broadcasting: 1\nI0128 22:27:27.933983    3959 log.go:172] (0xc000a10000) Reply frame received for 1\nI0128 22:27:27.934065    3959 log.go:172] (0xc000a10000) (0xc0007c2000) Create stream\nI0128 22:27:27.934096    3959 log.go:172] (0xc000a10000) (0xc0007c2000) Stream added, broadcasting: 3\nI0128 22:27:27.935519    3959 log.go:172] (0xc000a10000) Reply frame received for 3\nI0128 22:27:27.935639    3959 log.go:172] (0xc000a10000) (0xc0007c20a0) Create stream\nI0128 22:27:27.935647    3959 log.go:172] (0xc000a10000) (0xc0007c20a0) Stream added, broadcasting: 5\nI0128 22:27:27.937145    3959 log.go:172] (0xc000a10000) Reply frame received for 5\nI0128 22:27:28.014746    3959 log.go:172] (0xc000a10000) Data frame received for 5\nI0128 22:27:28.014928    3959 log.go:172] (0xc0007c20a0) (5) Data frame handling\nI0128 22:27:28.014991    3959 log.go:172] (0xc0007c20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 22:27:28.052224    3959 log.go:172] (0xc000a10000) Data frame received for 3\nI0128 22:27:28.052294    3959 log.go:172] (0xc0007c2000) (3) Data frame handling\nI0128 22:27:28.052313    3959 log.go:172] (0xc0007c2000) (3) Data frame sent\nI0128 22:27:28.123173    3959 log.go:172] (0xc000a10000) (0xc0007c20a0) Stream removed, broadcasting: 5\nI0128 22:27:28.123288    3959 log.go:172] (0xc000a10000) Data frame received for 1\nI0128 22:27:28.123336    3959 log.go:172] (0xc00076a000) (1) Data frame handling\nI0128 22:27:28.123375    3959 log.go:172] (0xc00076a000) (1) Data frame sent\nI0128 22:27:28.123424    3959 log.go:172] (0xc000a10000) (0xc0007c2000) Stream removed, broadcasting: 3\nI0128 22:27:28.123542    3959 log.go:172] (0xc000a10000) (0xc00076a000) Stream removed, broadcasting: 1\nI0128 22:27:28.123582    3959 log.go:172] (0xc000a10000) Go away received\nI0128 22:27:28.124928    3959 log.go:172] (0xc000a10000) (0xc00076a000) Stream removed, broadcasting: 1\nI0128 22:27:28.124945    3959 log.go:172] (0xc000a10000) (0xc0007c2000) Stream removed, broadcasting: 3\nI0128 22:27:28.124955    3959 log.go:172] (0xc000a10000) (0xc0007c20a0) Stream removed, broadcasting: 5\n"
Jan 28 22:27:28.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 22:27:28.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 28 22:27:28.141: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 22:27:28.167: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
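The framework is polling the StatefulSet's status fields here; a sketch of the equivalent manual check (names from this run; readyReplicas may be omitted from the output entirely once it reaches 0):

  # replicas stays at 3 while readyReplicas falls to 0 as the probes fail
  kubectl --kubeconfig=/root/.kube/config get statefulset ss -n statefulset-1724 -o jsonpath='{.status.replicas} {.status.readyReplicas}'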
Jan 28 22:27:38.182: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 22:27:38.182: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 22:27:38.182: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 22:27:38.233: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:38.233: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:38.233: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:38.233: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:38.233: INFO: 
Jan 28 22:27:38.233: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:40.102: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:40.102: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:40.102: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:40.102: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:40.102: INFO: 
Jan 28 22:27:40.102: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:41.110: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:41.111: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:41.111: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:41.111: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:41.111: INFO: 
Jan 28 22:27:41.111: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:42.235: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:42.235: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:42.235: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:42.235: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:42.235: INFO: 
Jan 28 22:27:42.235: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:43.246: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:43.246: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:43.246: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:43.246: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:43.246: INFO: 
Jan 28 22:27:43.246: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:44.259: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:44.259: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:44.259: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:44.259: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:44.259: INFO: 
Jan 28 22:27:44.259: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:45.269: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:45.269: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:45.269: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:45.269: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:45.270: INFO: 
Jan 28 22:27:45.270: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:46.281: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:46.281: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:26:44 +0000 UTC  }]
Jan 28 22:27:46.281: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:46.282: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:46.282: INFO: 
Jan 28 22:27:46.282: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 22:27:47.292: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 22:27:47.292: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 22:27:15 +0000 UTC  }]
Jan 28 22:27:47.293: INFO: 
Jan 28 22:27:47.293: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1724
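The scale-down itself is an API update of the set's spec.replicas; a command-line sketch of an equivalent operation (kubectl stands in for the framework's direct client call):

  kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 -n statefulset-1724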
Jan 28 22:27:48.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:27:48.589: INFO: rc: 1
Jan 28 22:27:48.589: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
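The exec is being retried on a fixed 10s cadence: the first failure above ("container not found") is the webserver container disappearing while ss-1 terminates, and the later attempts fail because the scale-down has deleted the pod itself. A sketch of the retry pattern (the framework additionally gives up after a bounded number of attempts, as the end of this block shows):

  # re-run the exec every 10s until it succeeds or the caller's budget runs out
  until kubectl --kubeconfig=/root/.kube/config exec -n statefulset-1724 ss-1 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'; do
    sleep 10
  done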
Jan 28 22:27:58.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:27:58.761: INFO: rc: 1
Jan 28 22:27:58.761: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 28 22:28:08.762 - 22:32:44.320: INFO: The same RunHostCmd exec was retried every 10s (28 further attempts), each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-1" not found
Jan 28 22:32:54.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 22:32:54.446: INFO: rc: 1
Jan 28 22:32:54.446: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Jan 28 22:32:54.446: INFO: Scaling statefulset ss to 0
Jan 28 22:32:54.461: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 28 22:32:54.464: INFO: Deleting all statefulset in ns statefulset-1724
Jan 28 22:32:54.467: INFO: Scaling statefulset ss to 0
Jan 28 22:32:54.476: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 22:32:54.479: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:32:54.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1724" for this suite.

• [SLOW TEST:370.231 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":180,"skipped":2916,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:32:54.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan 28 22:32:54.622: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 28 22:32:59.630: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:32:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7378" for this suite.

• [SLOW TEST:5.311 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":181,"skipped":2930,"failed":0}
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:32:59.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jan 28 22:32:59.998: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4048" to be "success or failure"
Jan 28 22:33:00.040: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 40.990956ms
Jan 28 22:33:02.258: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259428869s
Jan 28 22:33:04.270: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271378918s
Jan 28 22:33:06.322: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.323875604s
Jan 28 22:33:08.333: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334245913s
Jan 28 22:33:10.341: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.342834262s
Jan 28 22:33:12.347: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.348121375s
Jan 28 22:33:14.353: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.354131264s
Jan 28 22:33:16.365: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.366273044s
STEP: Saw pod success
Jan 28 22:33:16.365: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 28 22:33:16.370: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 28 22:33:16.431: INFO: Waiting for pod pod-host-path-test to disappear
Jan 28 22:33:16.456: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:33:16.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4048" for this suite.

• [SLOW TEST:16.618 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2933,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:33:16.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 22:33:16.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2530'
Jan 28 22:33:16.714: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 22:33:16.714: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Jan 28 22:33:18.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2530'
Jan 28 22:33:19.028: INFO: stderr: ""
Jan 28 22:33:19.028: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:33:19.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2530" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":183,"skipped":2948,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:33:19.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:33:19.327: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 22:33:19.365: INFO: Number of nodes with available pods: 0
Jan 28 22:33:19.365: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:33:29.376: INFO: Number of nodes with available pods: 1
Jan 28 22:33:29.377: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 28 22:33:30.380: INFO: Number of nodes with available pods: 2
Jan 28 22:33:30.380: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 28 22:33:30.425: INFO: Wrong image for pod: daemon-set-cn7wf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:30.425: INFO: Wrong image for pod: daemon-set-w8czj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:38.457: INFO: Wrong image for pod: daemon-set-cn7wf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:38.458: INFO: Pod daemon-set-cn7wf is not available
Jan 28 22:33:38.458: INFO: Wrong image for pod: daemon-set-w8czj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:43.464: INFO: Wrong image for pod: daemon-set-w8czj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:43.464: INFO: Pod daemon-set-whz96 is not available
Jan 28 22:33:50.469: INFO: Wrong image for pod: daemon-set-w8czj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:54.459: INFO: Wrong image for pod: daemon-set-w8czj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 22:33:54.459: INFO: Pod daemon-set-w8czj is not available
Jan 28 22:33:55.459: INFO: Pod daemon-set-w5qmd is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 28 22:33:55.477: INFO: Number of nodes with available pods: 1
Jan 28 22:33:55.478: INFO: Node jerma-node is running more than one daemon pod
Jan 28 22:34:01.499: INFO: Number of nodes with available pods: 2
Jan 28 22:34:01.499: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8159, will wait for the garbage collector to delete the pods
Jan 28 22:34:01.587: INFO: Deleting DaemonSet.extensions daemon-set took: 9.700294ms
Jan 28 22:34:01.888: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.645412ms
Jan 28 22:34:08.712: INFO: Number of nodes with available pods: 0
Jan 28 22:34:08.712: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 22:34:08.717: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8159/daemonsets","resourceVersion":"4978353"},"items":null}

Jan 28 22:34:08.720: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8159/pods","resourceVersion":"4978353"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:34:08.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8159" for this suite.

• [SLOW TEST:49.616 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":184,"skipped":2984,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:34:08.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:34:08.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3" in namespace "projected-3371" to be "success or failure"
Jan 28 22:34:08.997: INFO: Pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3": Phase="Pending", Reason="", readiness=false. Elapsed: 53.80444ms
Jan 28 22:34:11.001: INFO: Pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057591572s
Jan 28 22:34:13.018: INFO: Pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074091209s
Jan 28 22:34:15.025: INFO: Pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081768151s
Jan 28 22:34:17.075: INFO: Pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.131845993s
STEP: Saw pod success
Jan 28 22:34:17.076: INFO: Pod "downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3" satisfied condition "success or failure"
Jan 28 22:34:17.081: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3 container client-container: 
STEP: delete the pod
Jan 28 22:34:17.157: INFO: Waiting for pod downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3 to disappear
Jan 28 22:34:17.165: INFO: Pod downwardapi-volume-d01d5299-b3aa-4e56-8839-95cd5749eca3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:34:17.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3371" for this suite.

• [SLOW TEST:8.470 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3020,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:34:17.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-df4d96cc-0ee0-405b-966c-b668e75bc4df
STEP: Creating secret with name s-test-opt-upd-8a8b3156-b54e-4a9a-a8e6-1e80f491449d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-df4d96cc-0ee0-405b-966c-b668e75bc4df
STEP: Updating secret s-test-opt-upd-8a8b3156-b54e-4a9a-a8e6-1e80f491449d
STEP: Creating secret with name s-test-opt-create-8c6e5b3f-e549-42ad-8c3f-3adc62dedf4e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:34:33.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5037" for this suite.

• [SLOW TEST:16.375 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3031,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:34:33.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:34:34.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:34:36.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847674, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847674, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847674, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:34:47.856: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:34:47.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:34:49.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-76" for this suite.
STEP: Destroying namespace "webhook-76-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.851 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":187,"skipped":3036,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:34:49.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5843.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5843.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5843.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5843.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5843.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 22:35:03.615: INFO: DNS probes using dns-5843/dns-test-ffaadf11-8eb4-417a-b02b-b288820cc1c3 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:03.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5843" for this suite.

• [SLOW TEST:14.234 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":188,"skipped":3051,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:03.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jan 28 22:35:03.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2886 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 28 22:35:16.157: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0128 22:35:14.788513    4593 log.go:172] (0xc0001046e0) (0xc0008d0320) Create stream\nI0128 22:35:14.788787    4593 log.go:172] (0xc0001046e0) (0xc0008d0320) Stream added, broadcasting: 1\nI0128 22:35:14.792138    4593 log.go:172] (0xc0001046e0) Reply frame received for 1\nI0128 22:35:14.792173    4593 log.go:172] (0xc0001046e0) (0xc000705ae0) Create stream\nI0128 22:35:14.792184    4593 log.go:172] (0xc0001046e0) (0xc000705ae0) Stream added, broadcasting: 3\nI0128 22:35:14.793355    4593 log.go:172] (0xc0001046e0) Reply frame received for 3\nI0128 22:35:14.793382    4593 log.go:172] (0xc0001046e0) (0xc0008d03c0) Create stream\nI0128 22:35:14.793391    4593 log.go:172] (0xc0001046e0) (0xc0008d03c0) Stream added, broadcasting: 5\nI0128 22:35:14.794409    4593 log.go:172] (0xc0001046e0) Reply frame received for 5\nI0128 22:35:14.794432    4593 log.go:172] (0xc0001046e0) (0xc0008d0460) Create stream\nI0128 22:35:14.794441    4593 log.go:172] (0xc0001046e0) (0xc0008d0460) Stream added, broadcasting: 7\nI0128 22:35:14.795552    4593 log.go:172] (0xc0001046e0) Reply frame received for 7\nI0128 22:35:14.796018    4593 log.go:172] (0xc000705ae0) (3) Writing data frame\nI0128 22:35:14.796153    4593 log.go:172] (0xc000705ae0) (3) Writing data frame\nI0128 22:35:14.798347    4593 log.go:172] (0xc0001046e0) Data frame received for 5\nI0128 22:35:14.798368    4593 log.go:172] (0xc0008d03c0) (5) Data frame handling\nI0128 22:35:14.798391    4593 log.go:172] (0xc0008d03c0) (5) Data frame sent\nI0128 22:35:14.800492    4593 log.go:172] (0xc0001046e0) Data frame received for 5\nI0128 22:35:14.800509    4593 log.go:172] (0xc0008d03c0) (5) Data frame handling\nI0128 22:35:14.800523    4593 log.go:172] (0xc0008d03c0) (5) Data frame sent\nI0128 22:35:16.099105    4593 log.go:172] (0xc0001046e0) (0xc000705ae0) Stream removed, broadcasting: 3\nI0128 22:35:16.099484    4593 log.go:172] (0xc0001046e0) Data frame received for 1\nI0128 22:35:16.099520    4593 log.go:172] (0xc0008d0320) (1) Data frame handling\nI0128 22:35:16.099592    4593 log.go:172] (0xc0008d0320) (1) Data frame sent\nI0128 22:35:16.099609    4593 log.go:172] (0xc0001046e0) (0xc0008d0320) Stream removed, broadcasting: 1\nI0128 22:35:16.101559    4593 log.go:172] (0xc0001046e0) (0xc0008d0460) Stream removed, broadcasting: 7\nI0128 22:35:16.101635    4593 log.go:172] (0xc0001046e0) (0xc0008d03c0) Stream removed, broadcasting: 5\nI0128 22:35:16.101667    4593 log.go:172] (0xc0001046e0) Go away received\nI0128 22:35:16.102518    4593 log.go:172] (0xc0001046e0) (0xc0008d0320) Stream removed, broadcasting: 1\nI0128 22:35:16.102788    4593 log.go:172] (0xc0001046e0) (0xc000705ae0) Stream removed, broadcasting: 3\nI0128 22:35:16.102826    4593 log.go:172] (0xc0001046e0) (0xc0008d03c0) Stream removed, broadcasting: 5\nI0128 22:35:16.102874    4593 log.go:172] (0xc0001046e0) (0xc0008d0460) Stream removed, broadcasting: 7\n"
Jan 28 22:35:16.158: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:18.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2886" for this suite.

• [SLOW TEST:14.494 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":189,"skipped":3081,"failed":0}
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:18.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 28 22:35:28.903: INFO: Successfully updated pod "labelsupdateffd2b516-3718-453b-9f56-6c4d430692f2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:30.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7658" for this suite.

• [SLOW TEST:12.793 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3081,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:30.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-a878b593-e2ce-4b33-9667-a2efff1a0a8f
STEP: Creating a pod to test consume configMaps
Jan 28 22:35:31.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8" in namespace "configmap-5843" to be "success or failure"
Jan 28 22:35:31.133: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.401806ms
Jan 28 22:35:33.142: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019628269s
Jan 28 22:35:35.153: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029698617s
Jan 28 22:35:37.162: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03906227s
Jan 28 22:35:39.169: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046094249s
Jan 28 22:35:41.176: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053073365s
STEP: Saw pod success
Jan 28 22:35:41.176: INFO: Pod "pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8" satisfied condition "success or failure"
Jan 28 22:35:41.180: INFO: Trying to get logs from node jerma-node pod pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8 container configmap-volume-test: 
STEP: delete the pod
Jan 28 22:35:41.245: INFO: Waiting for pod pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8 to disappear
Jan 28 22:35:41.258: INFO: Pod pod-configmaps-db4ed91f-4517-4f5f-a41b-18380dfbbfa8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:41.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5843" for this suite.

• [SLOW TEST:10.308 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3081,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:41.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-79e47f08-bf0a-49fe-9fd4-35c89509a012
STEP: Creating a pod to test consume secrets
Jan 28 22:35:41.418: INFO: Waiting up to 5m0s for pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f" in namespace "secrets-2421" to be "success or failure"
Jan 28 22:35:41.504: INFO: Pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 85.265638ms
Jan 28 22:35:43.512: INFO: Pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094047043s
Jan 28 22:35:45.519: INFO: Pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10031873s
Jan 28 22:35:47.525: INFO: Pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106778512s
Jan 28 22:35:49.533: INFO: Pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114370373s
STEP: Saw pod success
Jan 28 22:35:49.533: INFO: Pod "pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f" satisfied condition "success or failure"
Jan 28 22:35:49.537: INFO: Trying to get logs from node jerma-node pod pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f container secret-volume-test: 
STEP: delete the pod
Jan 28 22:35:49.573: INFO: Waiting for pod pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f to disappear
Jan 28 22:35:49.588: INFO: Pod pod-secrets-b5a1a73a-898e-46df-8378-057166b94d2f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:49.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2421" for this suite.

• [SLOW TEST:8.361 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3178,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:49.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 22:35:49.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6385'
Jan 28 22:35:50.024: INFO: stderr: ""
Jan 28 22:35:50.024: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Jan 28 22:35:50.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6385'
Jan 28 22:35:57.035: INFO: stderr: ""
Jan 28 22:35:57.035: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:57.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6385" for this suite.

• [SLOW TEST:7.430 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":193,"skipped":3183,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:57.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:35:57.219: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:35:58.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7771" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":194,"skipped":3200,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:35:58.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8158/configmap-test-d3901f76-ed97-4bb3-bee1-a27304bc77a2
STEP: Creating a pod to test consume configMaps
Jan 28 22:35:58.442: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91" in namespace "configmap-8158" to be "success or failure"
Jan 28 22:35:58.449: INFO: Pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.498851ms
Jan 28 22:36:00.459: INFO: Pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016624948s
Jan 28 22:36:02.467: INFO: Pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024178834s
Jan 28 22:36:04.481: INFO: Pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038128655s
Jan 28 22:36:06.489: INFO: Pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046161677s
STEP: Saw pod success
Jan 28 22:36:06.489: INFO: Pod "pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91" satisfied condition "success or failure"
Jan 28 22:36:06.492: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91 container env-test: 
STEP: delete the pod
Jan 28 22:36:06.552: INFO: Waiting for pod pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91 to disappear
Jan 28 22:36:06.564: INFO: Pod pod-configmaps-a7572ad6-e0ff-47fc-ab0e-a44a3dc1fd91 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:06.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8158" for this suite.

• [SLOW TEST:8.267 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3208,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:06.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:16.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7709" for this suite.

• [SLOW TEST:10.157 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3211,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:16.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:16.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6886" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":197,"skipped":3219,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:16.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-e908a155-b10b-4567-8003-a867090d66a0
STEP: Creating a pod to test consume configMaps
Jan 28 22:36:17.111: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0" in namespace "projected-6067" to be "success or failure"
Jan 28 22:36:17.128: INFO: Pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.146569ms
Jan 28 22:36:19.133: INFO: Pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022032216s
Jan 28 22:36:21.140: INFO: Pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029371487s
Jan 28 22:36:23.148: INFO: Pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036917624s
Jan 28 22:36:25.155: INFO: Pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04433981s
STEP: Saw pod success
Jan 28 22:36:25.155: INFO: Pod "pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0" satisfied condition "success or failure"
Jan 28 22:36:25.159: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 22:36:25.196: INFO: Waiting for pod pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0 to disappear
Jan 28 22:36:25.228: INFO: Pod pod-projected-configmaps-1a6bde58-a857-4d05-b9a9-28f502bf03c0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:25.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6067" for this suite.

• [SLOW TEST:8.278 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:25.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:36:25.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 28 22:36:25.487: INFO: stderr: ""
Jan 28 22:36:25.487: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:25.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6706" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":199,"skipped":3244,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:25.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:36:25.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64" in namespace "downward-api-9918" to be "success or failure"
Jan 28 22:36:25.673: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64": Phase="Pending", Reason="", readiness=false. Elapsed: 17.917038ms
Jan 28 22:36:27.680: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024366504s
Jan 28 22:36:29.689: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033915038s
Jan 28 22:36:31.698: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042454568s
Jan 28 22:36:33.707: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051582631s
Jan 28 22:36:35.714: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058925073s
STEP: Saw pod success
Jan 28 22:36:35.714: INFO: Pod "downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64" satisfied condition "success or failure"
Jan 28 22:36:35.719: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64 container client-container: 
STEP: delete the pod
Jan 28 22:36:35.973: INFO: Waiting for pod downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64 to disappear
Jan 28 22:36:36.030: INFO: Pod downwardapi-volume-1d8e2749-087e-40b4-a8d2-73c0a0653f64 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:36.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9918" for this suite.

• [SLOW TEST:10.548 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3262,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:36.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:36:36.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:36:38.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847796, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:36:41.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847796, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:36:43.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847796, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:36:44.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847797, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847796, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:36:48.073: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:36:48.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5099" for this suite.
STEP: Destroying namespace "webhook-5099-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":201,"skipped":3274,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:36:48.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 22:37:01.205: INFO: Successfully updated pod "pod-update-activedeadlineseconds-036a82ac-3d5c-4f61-bdcb-f5710a614508"
Jan 28 22:37:01.205: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-036a82ac-3d5c-4f61-bdcb-f5710a614508" in namespace "pods-5063" to be "terminated due to deadline exceeded"
Jan 28 22:37:01.215: INFO: Pod "pod-update-activedeadlineseconds-036a82ac-3d5c-4f61-bdcb-f5710a614508": Phase="Running", Reason="", readiness=true. Elapsed: 9.917783ms
Jan 28 22:37:03.224: INFO: Pod "pod-update-activedeadlineseconds-036a82ac-3d5c-4f61-bdcb-f5710a614508": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018285758s
Jan 28 22:37:03.224: INFO: Pod "pod-update-activedeadlineseconds-036a82ac-3d5c-4f61-bdcb-f5710a614508" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:37:03.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5063" for this suite.

• [SLOW TEST:14.703 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3278,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:37:03.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1b1c72da-6c96-4fcd-bd8d-9a6f7ac8718f
STEP: Creating a pod to test consume secrets
Jan 28 22:37:03.365: INFO: Waiting up to 5m0s for pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2" in namespace "secrets-7864" to be "success or failure"
Jan 28 22:37:03.374: INFO: Pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.246641ms
Jan 28 22:37:05.382: INFO: Pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017153037s
Jan 28 22:37:07.388: INFO: Pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023395039s
Jan 28 22:37:09.439: INFO: Pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074128854s
Jan 28 22:37:11.512: INFO: Pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146913422s
STEP: Saw pod success
Jan 28 22:37:11.512: INFO: Pod "pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2" satisfied condition "success or failure"
Jan 28 22:37:11.520: INFO: Trying to get logs from node jerma-node pod pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2 container secret-volume-test: 
STEP: delete the pod
Jan 28 22:37:11.582: INFO: Waiting for pod pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2 to disappear
Jan 28 22:37:11.649: INFO: Pod pod-secrets-798a9cb7-fcda-45f2-b14c-c1296f23e4b2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:37:11.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7864" for this suite.

• [SLOW TEST:8.418 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:37:11.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:37:19.959: INFO: Waiting up to 5m0s for pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56" in namespace "pods-7264" to be "success or failure"
Jan 28 22:37:19.979: INFO: Pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56": Phase="Pending", Reason="", readiness=false. Elapsed: 19.783438ms
Jan 28 22:37:21.983: INFO: Pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024199192s
Jan 28 22:37:23.996: INFO: Pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037185764s
Jan 28 22:37:26.002: INFO: Pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043077208s
Jan 28 22:37:28.007: INFO: Pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04814573s
STEP: Saw pod success
Jan 28 22:37:28.007: INFO: Pod "client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56" satisfied condition "success or failure"
Jan 28 22:37:28.011: INFO: Trying to get logs from node jerma-node pod client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56 container env3cont: 
STEP: delete the pod
Jan 28 22:37:28.167: INFO: Waiting for pod client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56 to disappear
Jan 28 22:37:28.173: INFO: Pod client-envvars-47a6bf2c-cce9-4a47-a816-3580f56bba56 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:37:28.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7264" for this suite.

• [SLOW TEST:16.524 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3333,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:37:28.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:37:28.727: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:37:30.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:37:32.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:37:34.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:37:36.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:37:38.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847848, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:37:41.786: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:37:41.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1426-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:37:43.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5442" for this suite.
STEP: Destroying namespace "webhook-5442-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.156 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":205,"skipped":3342,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:37:43.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 28 22:37:43.527: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 22:37:43.605: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 22:37:43.666: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 22:37:43.814: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.814: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:37:43.814: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 22:37:43.814: INFO: 	Container weave ready: true, restart count 1
Jan 28 22:37:43.814: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:37:43.814: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 22:37:43.847: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:37:43.848: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:37:43.848: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:37:43.848: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container weave ready: true, restart count 0
Jan 28 22:37:43.848: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:37:43.848: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 22:37:43.848: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 22:37:43.848: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container etcd ready: true, restart count 1
Jan 28 22:37:43.848: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 22:37:43.848: INFO: 	Container kube-apiserver ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.015: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 28 22:37:44.016: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 28 22:37:44.016: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 28 22:37:44.016: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Jan 28 22:37:44.016: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 28 22:37:44.269: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0.15ee2df87aea58b2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1191/filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0.15ee2df9f8a8cede], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0.15ee2dfabcaaa1b0], Reason = [Created], Message = [Created container filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0.15ee2dfada056603], Reason = [Started], Message = [Started container filler-pod-595a87fc-0a90-464c-b7d0-61e8fdbb3df0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e.15ee2df87c2c33ca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1191/filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e.15ee2df993162a42], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e.15ee2dfa397bf0ce], Reason = [Created], Message = [Created container filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e.15ee2dfa5b9547c9], Reason = [Started], Message = [Started container filler-pod-a00576fd-522d-4c2a-a14c-9f2001544d8e]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ee2dfb4a3d0e2b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:37:57.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1191" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:14.350 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":206,"skipped":3346,"failed":0}
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:37:57.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 28 22:37:57.800: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Jan 28 22:37:58.334: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 28 22:38:00.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:38:02.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:38:04.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:38:06.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:38:08.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:38:10.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847878, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:38:13.421: INFO: Waited 836.279764ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:38:13.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6196" for this suite.

• [SLOW TEST:16.357 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":207,"skipped":3349,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:38:14.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:38:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9134" for this suite.

• [SLOW TEST:16.821 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":208,"skipped":3354,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:38:30.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-066ad505-a308-43ab-a8f8-39ab084c0db5
STEP: Creating a pod to test consume secrets
Jan 28 22:38:31.227: INFO: Waiting up to 5m0s for pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121" in namespace "secrets-3668" to be "success or failure"
Jan 28 22:38:31.232: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121": Phase="Pending", Reason="", readiness=false. Elapsed: 5.681861ms
Jan 28 22:38:33.240: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013437704s
Jan 28 22:38:35.246: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018958992s
Jan 28 22:38:37.250: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023142392s
Jan 28 22:38:39.258: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030911882s
Jan 28 22:38:41.266: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.039741965s
STEP: Saw pod success
Jan 28 22:38:41.267: INFO: Pod "pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121" satisfied condition "success or failure"
Jan 28 22:38:41.277: INFO: Trying to get logs from node jerma-node pod pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121 container secret-volume-test: 
STEP: delete the pod
Jan 28 22:38:41.385: INFO: Waiting for pod pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121 to disappear
Jan 28 22:38:41.395: INFO: Pod pod-secrets-701995fa-899d-4611-a5d2-6f4a59387121 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:38:41.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3668" for this suite.
STEP: Destroying namespace "secret-namespace-3080" for this suite.

• [SLOW TEST:10.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3368,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:38:41.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 22:38:48.642: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:38:48.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7416" for this suite.

• [SLOW TEST:7.257 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3399,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:38:48.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-aa9fef22-5a48-4baf-8fc1-11684f4c78b0
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:38:48.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7583" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":211,"skipped":3402,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:38:48.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-4730
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4730
STEP: Deleting pre-stop pod
Jan 28 22:39:08.221: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:39:08.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4730" for this suite.

• [SLOW TEST:19.368 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":212,"skipped":3406,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:39:08.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 28 22:39:08.374: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 22:39:08.385: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 22:39:08.387: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 22:39:08.399: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.399: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:39:08.399: INFO: tester from prestop-4730 started at 2020-01-28 22:38:57 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.399: INFO: 	Container tester ready: true, restart count 0
Jan 28 22:39:08.399: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 22:39:08.399: INFO: 	Container weave ready: true, restart count 1
Jan 28 22:39:08.399: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:39:08.399: INFO: server from prestop-4730 started at 2020-01-28 22:38:49 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.399: INFO: 	Container server ready: true, restart count 0
Jan 28 22:39:08.399: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 22:39:08.408: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:39:08.408: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container coredns ready: true, restart count 0
Jan 28 22:39:08.408: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 22:39:08.408: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 22:39:08.408: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container weave ready: true, restart count 0
Jan 28 22:39:08.408: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 22:39:08.408: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 22:39:08.408: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 22:39:08.408: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 22:39:08.408: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ee2e0c0818e9e9], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:39:09.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9923" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":213,"skipped":3421,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:39:09.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1612
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 22:39:09.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 22:39:46.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1612 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 22:39:46.011: INFO: >>> kubeConfig: /root/.kube/config
I0128 22:39:46.057013       8 log.go:172] (0xc0026518c0) (0xc001423a40) Create stream
I0128 22:39:46.057454       8 log.go:172] (0xc0026518c0) (0xc001423a40) Stream added, broadcasting: 1
I0128 22:39:46.065332       8 log.go:172] (0xc0026518c0) Reply frame received for 1
I0128 22:39:46.065394       8 log.go:172] (0xc0026518c0) (0xc001423ae0) Create stream
I0128 22:39:46.065412       8 log.go:172] (0xc0026518c0) (0xc001423ae0) Stream added, broadcasting: 3
I0128 22:39:46.068012       8 log.go:172] (0xc0026518c0) Reply frame received for 3
I0128 22:39:46.068064       8 log.go:172] (0xc0026518c0) (0xc0011bc1e0) Create stream
I0128 22:39:46.068075       8 log.go:172] (0xc0026518c0) (0xc0011bc1e0) Stream added, broadcasting: 5
I0128 22:39:46.070750       8 log.go:172] (0xc0026518c0) Reply frame received for 5
I0128 22:39:46.187596       8 log.go:172] (0xc0026518c0) Data frame received for 3
I0128 22:39:46.187715       8 log.go:172] (0xc001423ae0) (3) Data frame handling
I0128 22:39:46.187767       8 log.go:172] (0xc001423ae0) (3) Data frame sent
I0128 22:39:46.257358       8 log.go:172] (0xc0026518c0) (0xc001423ae0) Stream removed, broadcasting: 3
I0128 22:39:46.257987       8 log.go:172] (0xc0026518c0) Data frame received for 1
I0128 22:39:46.258177       8 log.go:172] (0xc001423a40) (1) Data frame handling
I0128 22:39:46.258212       8 log.go:172] (0xc001423a40) (1) Data frame sent
I0128 22:39:46.258237       8 log.go:172] (0xc0026518c0) (0xc001423a40) Stream removed, broadcasting: 1
I0128 22:39:46.258771       8 log.go:172] (0xc0026518c0) (0xc0011bc1e0) Stream removed, broadcasting: 5
I0128 22:39:46.258974       8 log.go:172] (0xc0026518c0) (0xc001423a40) Stream removed, broadcasting: 1
I0128 22:39:46.259017       8 log.go:172] (0xc0026518c0) (0xc001423ae0) Stream removed, broadcasting: 3
I0128 22:39:46.259063       8 log.go:172] (0xc0026518c0) (0xc0011bc1e0) Stream removed, broadcasting: 5
I0128 22:39:46.259444       8 log.go:172] (0xc0026518c0) Go away received
Jan 28 22:39:46.259: INFO: Waiting for responses: map[]
Jan 28 22:39:46.265: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1612 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 22:39:46.265: INFO: >>> kubeConfig: /root/.kube/config
I0128 22:39:46.309564       8 log.go:172] (0xc0029dc0b0) (0xc00253e780) Create stream
I0128 22:39:46.309690       8 log.go:172] (0xc0029dc0b0) (0xc00253e780) Stream added, broadcasting: 1
I0128 22:39:46.317304       8 log.go:172] (0xc0029dc0b0) Reply frame received for 1
I0128 22:39:46.317470       8 log.go:172] (0xc0029dc0b0) (0xc001ee0aa0) Create stream
I0128 22:39:46.317519       8 log.go:172] (0xc0029dc0b0) (0xc001ee0aa0) Stream added, broadcasting: 3
I0128 22:39:46.319550       8 log.go:172] (0xc0029dc0b0) Reply frame received for 3
I0128 22:39:46.319603       8 log.go:172] (0xc0029dc0b0) (0xc0024ca0a0) Create stream
I0128 22:39:46.319617       8 log.go:172] (0xc0029dc0b0) (0xc0024ca0a0) Stream added, broadcasting: 5
I0128 22:39:46.320956       8 log.go:172] (0xc0029dc0b0) Reply frame received for 5
I0128 22:39:46.421105       8 log.go:172] (0xc0029dc0b0) Data frame received for 3
I0128 22:39:46.421177       8 log.go:172] (0xc001ee0aa0) (3) Data frame handling
I0128 22:39:46.421210       8 log.go:172] (0xc001ee0aa0) (3) Data frame sent
I0128 22:39:46.504058       8 log.go:172] (0xc0029dc0b0) Data frame received for 1
I0128 22:39:46.504252       8 log.go:172] (0xc0029dc0b0) (0xc001ee0aa0) Stream removed, broadcasting: 3
I0128 22:39:46.504523       8 log.go:172] (0xc00253e780) (1) Data frame handling
I0128 22:39:46.504685       8 log.go:172] (0xc00253e780) (1) Data frame sent
I0128 22:39:46.504794       8 log.go:172] (0xc0029dc0b0) (0xc0024ca0a0) Stream removed, broadcasting: 5
I0128 22:39:46.504860       8 log.go:172] (0xc0029dc0b0) (0xc00253e780) Stream removed, broadcasting: 1
I0128 22:39:46.504879       8 log.go:172] (0xc0029dc0b0) Go away received
I0128 22:39:46.505908       8 log.go:172] (0xc0029dc0b0) (0xc00253e780) Stream removed, broadcasting: 1
I0128 22:39:46.506137       8 log.go:172] (0xc0029dc0b0) (0xc001ee0aa0) Stream removed, broadcasting: 3
I0128 22:39:46.506175       8 log.go:172] (0xc0029dc0b0) (0xc0024ca0a0) Stream removed, broadcasting: 5
Jan 28 22:39:46.506: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:39:46.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1612" for this suite.

• [SLOW TEST:36.972 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3431,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:39:46.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:39:46.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2156" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":215,"skipped":3437,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:39:46.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:39:47.497: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:39:49.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:39:51.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:39:54.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:39:56.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:39:58.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:39:59.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715847987, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:40:03.156: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:40:03.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2319" for this suite.
STEP: Destroying namespace "webhook-2319-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.765 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":216,"skipped":3456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:40:03.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:40:03.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954" in namespace "projected-804" to be "success or failure"
Jan 28 22:40:03.624: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 9.970796ms
Jan 28 22:40:05.633: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018369145s
Jan 28 22:40:07.639: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024242311s
Jan 28 22:40:09.654: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039920939s
Jan 28 22:40:11.663: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049026141s
Jan 28 22:40:13.674: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0593175s
Jan 28 22:40:15.682: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067655803s
Jan 28 22:40:17.689: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.074196566s
STEP: Saw pod success
Jan 28 22:40:17.689: INFO: Pod "downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954" satisfied condition "success or failure"
Jan 28 22:40:17.694: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954 container client-container: 
STEP: delete the pod
Jan 28 22:40:17.766: INFO: Waiting for pod downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954 to disappear
Jan 28 22:40:17.775: INFO: Pod downwardapi-volume-d217706e-d21b-4227-8282-ee1a2ebae954 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:40:17.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-804" for this suite.

• [SLOW TEST:14.320 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:40:17.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-4g4cg in namespace proxy-6309
I0128 22:40:18.005085       8 runners.go:189] Created replication controller with name: proxy-service-4g4cg, namespace: proxy-6309, replica count: 1
I0128 22:40:19.055970       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:20.056347       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:21.056764       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:22.057317       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:23.058077       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:24.058806       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:25.059268       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 22:40:26.059663       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 22:40:27.060165       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 22:40:28.060561       8 runners.go:189] proxy-service-4g4cg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 22:40:28.066: INFO: setup took 10.152007306s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
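
The 16 cases are the cross product of proxying to the pod versus the service, with and without an explicit scheme prefix (http:, https:) in the target name, across the server's named ports; every attempt below is a GET against an apiserver proxy URL of one of these shapes. A minimal sketch of the two URL builders; the helper names are illustrative:

    package sketch

    import "fmt"

    // The scheme prefix ("http:", "https:") and the port suffix are optional
    // parts of the pod or service name in these proxy paths.
    func podProxyURL(ns, pod, portSuffix string) string {
        return fmt.Sprintf("/api/v1/namespaces/%s/pods/%s%s/proxy/", ns, pod, portSuffix)
    }

    func serviceProxyURL(ns, svc, portName string) string {
        return fmt.Sprintf("/api/v1/namespaces/%s/services/%s:%s/proxy/", ns, svc, portName)
    }

For example, podProxyURL("proxy-6309", "proxy-service-4g4cg-fxmg6", ":162") yields the /pods/proxy-service-4g4cg-fxmg6:162/proxy/ path seen in the attempts below.
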
Jan 28 22:40:28.092: INFO: (0) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 24.406072ms)
Jan 28 22:40:28.092: INFO: (0) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 25.510393ms)
Jan 28 22:40:28.093: INFO: (0) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 25.796493ms)
Jan 28 22:40:28.093: INFO: (0) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 25.430587ms)
Jan 28 22:40:28.094: INFO: (0) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 27.661429ms)
Jan 28 22:40:28.094: INFO: (0) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 26.788471ms)
Jan 28 22:40:28.095: INFO: (0) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 27.961317ms)
Jan 28 22:40:28.095: INFO: (0) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 27.14608ms)
Jan 28 22:40:28.095: INFO: (0) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 28.911384ms)
Jan 28 22:40:28.100: INFO: (0) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 32.32199ms)
Jan 28 22:40:28.101: INFO: (0) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 33.747636ms)
Jan 28 22:40:28.104: INFO: (0) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 18.840583ms)
Jan 28 22:40:28.131: INFO: (1) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 19.856059ms)
Jan 28 22:40:28.132: INFO: (1) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 20.621052ms)
Jan 28 22:40:28.132: INFO: (1) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 20.739988ms)
Jan 28 22:40:28.133: INFO: (1) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 21.812375ms)
Jan 28 22:40:28.134: INFO: (1) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 23.237797ms)
Jan 28 22:40:28.136: INFO: (1) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 24.287746ms)
Jan 28 22:40:28.136: INFO: (1) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 24.439549ms)
Jan 28 22:40:28.136: INFO: (1) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 25.291568ms)
Jan 28 22:40:28.137: INFO: (1) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 26.225727ms)
Jan 28 22:40:28.138: INFO: (1) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 26.866018ms)
Jan 28 22:40:28.138: INFO: (1) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 26.715135ms)
Jan 28 22:40:28.153: INFO: (2) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 13.706773ms)
Jan 28 22:40:28.153: INFO: (2) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 14.689608ms)
Jan 28 22:40:28.155: INFO: (2) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 16.724679ms)
Jan 28 22:40:28.157: INFO: (2) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 22.79252ms)
Jan 28 22:40:28.162: INFO: (2) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 23.270946ms)
Jan 28 22:40:28.162: INFO: (2) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 23.148683ms)
Jan 28 22:40:28.164: INFO: (2) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 25.64284ms)
Jan 28 22:40:28.164: INFO: (2) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 25.222861ms)
Jan 28 22:40:28.165: INFO: (2) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 26.211949ms)
Jan 28 22:40:28.180: INFO: (3) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 15.242345ms)
Jan 28 22:40:28.180: INFO: (3) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 15.768358ms)
Jan 28 22:40:28.181: INFO: (3) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 15.506966ms)
Jan 28 22:40:28.181: INFO: (3) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 16.106287ms)
Jan 28 22:40:28.181: INFO: (3) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 16.236721ms)
Jan 28 22:40:28.182: INFO: (3) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 18.246781ms)
Jan 28 22:40:28.187: INFO: (3) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 22.008755ms)
Jan 28 22:40:28.192: INFO: (3) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 27.515154ms)
Jan 28 22:40:28.192: INFO: (3) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 27.105643ms)
Jan 28 22:40:28.192: INFO: (3) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 27.147293ms)
Jan 28 22:40:28.205: INFO: (4) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 11.868772ms)
Jan 28 22:40:28.214: INFO: (4) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 21.247889ms)
Jan 28 22:40:28.214: INFO: (4) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 21.545091ms)
Jan 28 22:40:28.215: INFO: (4) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 22.124079ms)
Jan 28 22:40:28.215: INFO: (4) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 21.862259ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 26.70794ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 26.928161ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 27.184582ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 26.871487ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 26.401734ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 26.859644ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 27.456508ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 27.349775ms)
Jan 28 22:40:28.220: INFO: (4) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test<... (200; 7.350671ms)
Jan 28 22:40:28.230: INFO: (5) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 7.46152ms)
Jan 28 22:40:28.231: INFO: (5) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 9.37392ms)
Jan 28 22:40:28.232: INFO: (5) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 9.347385ms)
Jan 28 22:40:28.232: INFO: (5) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 9.949667ms)
Jan 28 22:40:28.232: INFO: (5) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 10.217658ms)
Jan 28 22:40:28.232: INFO: (5) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 10.425375ms)
Jan 28 22:40:28.233: INFO: (5) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 10.67432ms)
Jan 28 22:40:28.233: INFO: (5) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 10.860317ms)
Jan 28 22:40:28.233: INFO: (5) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 10.937991ms)
Jan 28 22:40:28.233: INFO: (5) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 11.148104ms)
Jan 28 22:40:28.233: INFO: (5) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 11.535078ms)
Jan 28 22:40:28.233: INFO: (5) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 10.416865ms)
Jan 28 22:40:28.246: INFO: (6) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 10.602478ms)
Jan 28 22:40:28.246: INFO: (6) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 10.614423ms)
Jan 28 22:40:28.246: INFO: (6) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 10.778037ms)
Jan 28 22:40:28.247: INFO: (6) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 11.398756ms)
Jan 28 22:40:28.247: INFO: (6) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 11.710176ms)
Jan 28 22:40:28.247: INFO: (6) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 11.516911ms)
Jan 28 22:40:28.248: INFO: (6) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 11.85517ms)
Jan 28 22:40:28.248: INFO: (6) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 9.464683ms)
Jan 28 22:40:28.262: INFO: (7) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 11.306495ms)
Jan 28 22:40:28.262: INFO: (7) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 11.764779ms)
Jan 28 22:40:28.265: INFO: (7) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 13.876254ms)
Jan 28 22:40:28.265: INFO: (7) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 13.873149ms)
Jan 28 22:40:28.265: INFO: (7) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 14.110368ms)
Jan 28 22:40:28.268: INFO: (7) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 17.694026ms)
Jan 28 22:40:28.268: INFO: (7) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 17.835161ms)
Jan 28 22:40:28.270: INFO: (7) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 18.887466ms)
Jan 28 22:40:28.270: INFO: (7) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 19.168719ms)
Jan 28 22:40:28.270: INFO: (7) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 18.956831ms)
Jan 28 22:40:28.270: INFO: (7) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 19.684785ms)
Jan 28 22:40:28.281: INFO: (8) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 10.47624ms)
Jan 28 22:40:28.281: INFO: (8) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 10.566297ms)
Jan 28 22:40:28.281: INFO: (8) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 10.499898ms)
Jan 28 22:40:28.282: INFO: (8) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 11.723995ms)
Jan 28 22:40:28.282: INFO: (8) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 12.133837ms)
Jan 28 22:40:28.283: INFO: (8) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 12.037297ms)
Jan 28 22:40:28.283: INFO: (8) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 12.264577ms)
Jan 28 22:40:28.283: INFO: (8) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 12.741698ms)
Jan 28 22:40:28.284: INFO: (8) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 13.8418ms)
Jan 28 22:40:28.284: INFO: (8) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 13.922867ms)
Jan 28 22:40:28.299: INFO: (9) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 13.98777ms)
Jan 28 22:40:28.299: INFO: (9) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 14.324531ms)
Jan 28 22:40:28.299: INFO: (9) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 13.905983ms)
Jan 28 22:40:28.299: INFO: (9) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 14.076403ms)
Jan 28 22:40:28.299: INFO: (9) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 14.392834ms)
Jan 28 22:40:28.300: INFO: (9) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 14.918907ms)
Jan 28 22:40:28.300: INFO: (9) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 14.637918ms)
Jan 28 22:40:28.300: INFO: (9) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 15.034202ms)
Jan 28 22:40:28.300: INFO: (9) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 15.399688ms)
Jan 28 22:40:28.300: INFO: (9) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 15.022483ms)
Jan 28 22:40:28.301: INFO: (9) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test<... (200; 8.728215ms)
Jan 28 22:40:28.310: INFO: (10) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 9.586394ms)
Jan 28 22:40:28.311: INFO: (10) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 10.506261ms)
Jan 28 22:40:28.312: INFO: (10) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 10.8923ms)
Jan 28 22:40:28.312: INFO: (10) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 8.328408ms)
Jan 28 22:40:28.323: INFO: (11) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 8.11916ms)
Jan 28 22:40:28.323: INFO: (11) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 8.209953ms)
Jan 28 22:40:28.323: INFO: (11) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 8.473636ms)
Jan 28 22:40:28.323: INFO: (11) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ... (200; 13.420404ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 13.708209ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 14.034745ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 14.085431ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 13.746486ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 14.024491ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 13.78107ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 13.779858ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 13.917785ms)
Jan 28 22:40:28.328: INFO: (11) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 13.848969ms)
Jan 28 22:40:28.336: INFO: (12) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 7.688038ms)
Jan 28 22:40:28.337: INFO: (12) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 8.31616ms)
Jan 28 22:40:28.337: INFO: (12) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 8.444882ms)
Jan 28 22:40:28.340: INFO: (12) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 11.69934ms)
Jan 28 22:40:28.341: INFO: (12) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 12.120343ms)
Jan 28 22:40:28.341: INFO: (12) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 12.316166ms)
Jan 28 22:40:28.344: INFO: (12) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test<... (200; 16.687338ms)
Jan 28 22:40:28.345: INFO: (12) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 16.815778ms)
Jan 28 22:40:28.346: INFO: (12) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 17.658324ms)
Jan 28 22:40:28.346: INFO: (12) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 17.521553ms)
Jan 28 22:40:28.348: INFO: (12) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 19.250917ms)
Jan 28 22:40:28.349: INFO: (12) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 20.4297ms)
Jan 28 22:40:28.355: INFO: (13) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 5.516918ms)
Jan 28 22:40:28.355: INFO: (13) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 5.838342ms)
Jan 28 22:40:28.355: INFO: (13) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 5.781469ms)
Jan 28 22:40:28.355: INFO: (13) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 5.931848ms)
Jan 28 22:40:28.356: INFO: (13) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 6.377151ms)
Jan 28 22:40:28.356: INFO: (13) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 6.450987ms)
Jan 28 22:40:28.356: INFO: (13) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 6.484009ms)
Jan 28 22:40:28.356: INFO: (13) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 6.55077ms)
Jan 28 22:40:28.356: INFO: (13) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 7.984135ms)
Jan 28 22:40:28.368: INFO: (14) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 7.936612ms)
Jan 28 22:40:28.369: INFO: (14) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 8.774499ms)
Jan 28 22:40:28.369: INFO: (14) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 8.667356ms)
Jan 28 22:40:28.370: INFO: (14) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 9.323029ms)
Jan 28 22:40:28.370: INFO: (14) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 9.429781ms)
Jan 28 22:40:28.370: INFO: (14) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 9.281561ms)
Jan 28 22:40:28.370: INFO: (14) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 10.012885ms)
Jan 28 22:40:28.384: INFO: (15) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test<... (200; 10.279224ms)
Jan 28 22:40:28.384: INFO: (15) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 10.345048ms)
Jan 28 22:40:28.384: INFO: (15) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 10.560211ms)
Jan 28 22:40:28.384: INFO: (15) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 10.466995ms)
Jan 28 22:40:28.384: INFO: (15) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 10.751496ms)
Jan 28 22:40:28.386: INFO: (15) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 12.141883ms)
Jan 28 22:40:28.386: INFO: (15) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 12.628297ms)
Jan 28 22:40:28.386: INFO: (15) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 12.565754ms)
Jan 28 22:40:28.386: INFO: (15) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 12.97616ms)
Jan 28 22:40:28.386: INFO: (15) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 12.859938ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 9.041476ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 9.004151ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 9.211667ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 9.403499ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 9.556326ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 9.477379ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 9.590662ms)
Jan 28 22:40:28.396: INFO: (16) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 9.656924ms)
Jan 28 22:40:28.398: INFO: (16) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 11.45273ms)
Jan 28 22:40:28.399: INFO: (16) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 12.511353ms)
Jan 28 22:40:28.399: INFO: (16) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 12.196584ms)
Jan 28 22:40:28.399: INFO: (16) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 12.325487ms)
Jan 28 22:40:28.399: INFO: (16) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 12.322046ms)
Jan 28 22:40:28.399: INFO: (16) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 12.740694ms)
Jan 28 22:40:28.406: INFO: (17) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 6.216216ms)
Jan 28 22:40:28.406: INFO: (17) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 6.638226ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 7.935411ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 8.042914ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 8.104932ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 8.220323ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 8.372433ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 8.229826ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 8.435486ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 8.324855ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 8.221827ms)
Jan 28 22:40:28.408: INFO: (17) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 8.56434ms)
Jan 28 22:40:28.409: INFO: (17) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 9.039389ms)
Jan 28 22:40:28.409: INFO: (17) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 9.25858ms)
Jan 28 22:40:28.409: INFO: (17) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 9.349222ms)
Jan 28 22:40:28.416: INFO: (18) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 6.22254ms)
Jan 28 22:40:28.416: INFO: (18) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 6.933842ms)
Jan 28 22:40:28.416: INFO: (18) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname1/proxy/: tls baz (200; 6.971444ms)
Jan 28 22:40:28.417: INFO: (18) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 6.862335ms)
Jan 28 22:40:28.418: INFO: (18) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: test (200; 10.074043ms)
Jan 28 22:40:28.420: INFO: (18) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 10.056327ms)
Jan 28 22:40:28.420: INFO: (18) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname1/proxy/: foo (200; 10.099151ms)
Jan 28 22:40:28.420: INFO: (18) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname2/proxy/: bar (200; 10.309358ms)
Jan 28 22:40:28.421: INFO: (18) /api/v1/namespaces/proxy-6309/services/http:proxy-service-4g4cg:portname1/proxy/: foo (200; 11.247844ms)
Jan 28 22:40:28.421: INFO: (18) /api/v1/namespaces/proxy-6309/services/proxy-service-4g4cg:portname2/proxy/: bar (200; 11.513745ms)
Jan 28 22:40:28.421: INFO: (18) /api/v1/namespaces/proxy-6309/services/https:proxy-service-4g4cg:tlsportname2/proxy/: tls qux (200; 11.629711ms)
Jan 28 22:40:28.428: INFO: (19) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:1080/proxy/: test<... (200; 7.12564ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 7.303256ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:1080/proxy/: ... (200; 7.458096ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6/proxy/: test (200; 7.368112ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:460/proxy/: tls baz (200; 7.416812ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 7.855861ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/proxy-service-4g4cg-fxmg6:162/proxy/: bar (200; 7.850744ms)
Jan 28 22:40:28.429: INFO: (19) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:462/proxy/: tls qux (200; 7.955752ms)
Jan 28 22:40:28.430: INFO: (19) /api/v1/namespaces/proxy-6309/pods/http:proxy-service-4g4cg-fxmg6:160/proxy/: foo (200; 8.229625ms)
Jan 28 22:40:28.430: INFO: (19) /api/v1/namespaces/proxy-6309/pods/https:proxy-service-4g4cg-fxmg6:443/proxy/: ...
...
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-d7bea327-dc5c-4f2f-a71d-e631197792cf in namespace container-probe-5140
Jan 28 22:40:52.702: INFO: Started pod busybox-d7bea327-dc5c-4f2f-a71d-e631197792cf in namespace container-probe-5140
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 22:40:52.707: INFO: Initial restart count of pod busybox-d7bea327-dc5c-4f2f-a71d-e631197792cf is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:44:53.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5140" for this suite.

• [SLOW TEST:251.574 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3531,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:44:54.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e175623d-02cd-4ee9-a887-811d3bea024e
STEP: Creating a pod to test consume secrets
Jan 28 22:44:54.250: INFO: Waiting up to 5m0s for pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05" in namespace "secrets-2307" to be "success or failure"
Jan 28 22:44:54.305: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05": Phase="Pending", Reason="", readiness=false. Elapsed: 55.166401ms
Jan 28 22:44:56.321: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070927418s
Jan 28 22:44:58.328: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078333941s
Jan 28 22:45:00.336: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085961821s
Jan 28 22:45:02.346: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095779726s
Jan 28 22:45:04.353: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10310788s
STEP: Saw pod success
Jan 28 22:45:04.353: INFO: Pod "pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05" satisfied condition "success or failure"
Jan 28 22:45:04.358: INFO: Trying to get logs from node jerma-node pod pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05 container secret-env-test: 
STEP: delete the pod
Jan 28 22:45:04.406: INFO: Waiting for pod pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05 to disappear
Jan 28 22:45:04.415: INFO: Pod pod-secrets-1b2fd4b9-58ba-4d7a-831b-0f925f27bf05 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:45:04.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2307" for this suite.

• [SLOW TEST:10.403 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:45:04.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 28 22:45:04.557: INFO: Waiting up to 5m0s for pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb" in namespace "downward-api-6588" to be "success or failure"
Jan 28 22:45:04.596: INFO: Pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 39.483315ms
Jan 28 22:45:06.611: INFO: Pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054224779s
Jan 28 22:45:08.623: INFO: Pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066483955s
Jan 28 22:45:10.629: INFO: Pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071931385s
Jan 28 22:45:12.685: INFO: Pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128490723s
STEP: Saw pod success
Jan 28 22:45:12.686: INFO: Pod "downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb" satisfied condition "success or failure"
Jan 28 22:45:12.689: INFO: Trying to get logs from node jerma-node pod downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb container dapi-container: 
STEP: delete the pod
Jan 28 22:45:12.745: INFO: Waiting for pod downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb to disappear
Jan 28 22:45:12.770: INFO: Pod downward-api-97be547a-b687-4aa2-8bc0-d26ae2d6f9eb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:45:12.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6588" for this suite.

• [SLOW TEST:8.342 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3603,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:45:12.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-b7b96145-8cff-437b-a314-15186830824f in namespace container-probe-2266
Jan 28 22:45:20.974: INFO: Started pod liveness-b7b96145-8cff-437b-a314-15186830824f in namespace container-probe-2266
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 22:45:20.978: INFO: Initial restart count of pod liveness-b7b96145-8cff-437b-a314-15186830824f is 0
Jan 28 22:45:39.043: INFO: Restart count of pod container-probe-2266/liveness-b7b96145-8cff-437b-a314-15186830824f is now 1 (18.064625812s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:45:39.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2266" for this suite.

• [SLOW TEST:26.310 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3617,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:45:39.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 28 22:45:39.217: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 22:45:42.336: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:45:54.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4881" for this suite.

• [SLOW TEST:15.077 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":223,"skipped":3636,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:45:54.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:45:54.880: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:45:56.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848355, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:45:58.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848355, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:46:00.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848355, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848354, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:46:03.967: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:46:04.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5101" for this suite.
STEP: Destroying namespace "webhook-5101-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.926 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":224,"skipped":3641,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:46:05.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 28 22:46:05.249: INFO: Waiting up to 5m0s for pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc" in namespace "emptydir-1801" to be "success or failure"
Jan 28 22:46:05.261: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.953886ms
Jan 28 22:46:07.268: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01851241s
Jan 28 22:46:09.276: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026428909s
Jan 28 22:46:11.285: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034763662s
Jan 28 22:46:13.292: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041720806s
Jan 28 22:46:15.299: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04964015s
STEP: Saw pod success
Jan 28 22:46:15.300: INFO: Pod "pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc" satisfied condition "success or failure"
Jan 28 22:46:15.305: INFO: Trying to get logs from node jerma-node pod pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc container test-container: 
STEP: delete the pod
Jan 28 22:46:15.370: INFO: Waiting for pod pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc to disappear
Jan 28 22:46:15.405: INFO: Pod pod-fe3a4ad5-d9f2-47c5-93f0-bae5136a5dfc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:46:15.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1801" for this suite.

• [SLOW TEST:10.336 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3652,"failed":0}
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:46:15.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:46:15.503: INFO: Creating ReplicaSet my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086
Jan 28 22:46:15.575: INFO: Pod name my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086: Found 0 pods out of 1
Jan 28 22:46:20.589: INFO: Pod name my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086: Found 1 pods out of 1
Jan 28 22:46:20.589: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086" is running
Jan 28 22:46:24.681: INFO: Pod "my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086-s4zrb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 22:46:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 22:46:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 22:46:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 22:46:15 +0000 UTC Reason: Message:}])
Jan 28 22:46:24.681: INFO: Trying to dial the pod
Jan 28 22:46:29.724: INFO: Controller my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086: Got expected result from replica 1 [my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086-s4zrb]: "my-hostname-basic-631f8a80-67c7-4ca9-b4a6-2c8021368086-s4zrb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:46:29.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3534" for this suite.

• [SLOW TEST:14.312 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":226,"skipped":3653,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:46:29.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:46:38.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3953" for this suite.

• [SLOW TEST:8.300 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:46:38.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 28 22:46:50.793: INFO: Successfully updated pod "annotationupdate5ce1e502-8354-4aac-ace1-5c2bac815e9c"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:46:52.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4862" for this suite.

• [SLOW TEST:14.839 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3683,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:46:52.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jan 28 22:46:53.040: INFO: Waiting up to 5m0s for pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103" in namespace "var-expansion-7659" to be "success or failure"
Jan 28 22:46:53.046: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103": Phase="Pending", Reason="", readiness=false. Elapsed: 5.849953ms
Jan 28 22:46:55.053: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012606839s
Jan 28 22:46:57.064: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023818952s
Jan 28 22:46:59.071: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030761247s
Jan 28 22:47:01.077: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036572291s
Jan 28 22:47:03.083: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042790387s
STEP: Saw pod success
Jan 28 22:47:03.083: INFO: Pod "var-expansion-c3992e22-8658-47fb-9782-65f478f4f103" satisfied condition "success or failure"
Jan 28 22:47:03.088: INFO: Trying to get logs from node jerma-node pod var-expansion-c3992e22-8658-47fb-9782-65f478f4f103 container dapi-container: 
STEP: delete the pod
Jan 28 22:47:03.274: INFO: Waiting for pod var-expansion-c3992e22-8658-47fb-9782-65f478f4f103 to disappear
Jan 28 22:47:03.289: INFO: Pod var-expansion-c3992e22-8658-47fb-9782-65f478f4f103 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:03.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7659" for this suite.

• [SLOW TEST:10.408 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3696,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:03.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 22:47:03.421: INFO: Waiting up to 5m0s for pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18" in namespace "emptydir-9461" to be "success or failure"
Jan 28 22:47:03.489: INFO: Pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18": Phase="Pending", Reason="", readiness=false. Elapsed: 67.697051ms
Jan 28 22:47:05.500: INFO: Pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078907088s
Jan 28 22:47:07.507: INFO: Pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085831739s
Jan 28 22:47:09.516: INFO: Pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094680645s
Jan 28 22:47:11.526: INFO: Pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10528672s
STEP: Saw pod success
Jan 28 22:47:11.527: INFO: Pod "pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18" satisfied condition "success or failure"
Jan 28 22:47:11.531: INFO: Trying to get logs from node jerma-node pod pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18 container test-container: 
STEP: delete the pod
Jan 28 22:47:11.601: INFO: Waiting for pod pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18 to disappear
Jan 28 22:47:11.670: INFO: Pod pod-0db6f9ef-d38d-4dc9-b20b-01cf70239e18 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:11.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9461" for this suite.

• [SLOW TEST:8.382 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3769,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:11.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:11.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3978" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":231,"skipped":3780,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:11.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:47:12.157: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 30.939093ms)
Jan 28 22:47:12.318: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 160.313062ms)
Jan 28 22:47:12.325: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 6.489729ms)
Jan 28 22:47:12.329: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.743792ms)
Jan 28 22:47:12.344: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 15.114906ms)
Jan 28 22:47:12.358: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 13.746561ms)
Jan 28 22:47:12.365: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 6.707119ms)
Jan 28 22:47:12.373: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 7.471527ms)
Jan 28 22:47:12.378: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.446661ms)
Jan 28 22:47:12.382: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.899705ms)
Jan 28 22:47:12.385: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.877386ms)
Jan 28 22:47:12.389: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.491989ms)
Jan 28 22:47:12.396: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 7.151405ms)
Jan 28 22:47:12.399: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.064398ms)
Jan 28 22:47:12.402: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.320341ms)
Jan 28 22:47:12.406: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.414768ms)
Jan 28 22:47:12.411: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.174181ms)
Jan 28 22:47:12.414: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.839324ms)
Jan 28 22:47:12.493: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 78.973508ms)
Jan 28 22:47:12.502: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 9.267782ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:12.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-903" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":232,"skipped":3797,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:12.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:47:12.691: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:14.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5644" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":233,"skipped":3804,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:14.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 22:47:22.305: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:22.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4270" for this suite.

• [SLOW TEST:8.342 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3810,"failed":0}
SSSSSSSSSSS
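
The fields under test are `terminationMessagePath` plus a non-root `runAsUser`; the kubelet bind-mounts the message file into the container, so a non-root user can still write it. A minimal sketch (pod name and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    securityContext:
      runAsUser: 1000                                      # non-root user
    terminationMessagePath: /dev/termination-custom-log    # non-default path
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
EOF

# After the container exits, the message surfaces in pod status:
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
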
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:22.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-f1878dc0-3804-4aa0-b74d-97c26975b28b
STEP: Creating a pod to test consume configMaps
Jan 28 22:47:22.544: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a" in namespace "configmap-6641" to be "success or failure"
Jan 28 22:47:22.584: INFO: Pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.344959ms
Jan 28 22:47:24.595: INFO: Pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050982532s
Jan 28 22:47:26.601: INFO: Pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057252846s
Jan 28 22:47:28.618: INFO: Pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073988044s
Jan 28 22:47:30.629: INFO: Pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084960679s
STEP: Saw pod success
Jan 28 22:47:30.629: INFO: Pod "pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a" satisfied condition "success or failure"
Jan 28 22:47:30.634: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a container configmap-volume-test: 
STEP: delete the pod
Jan 28 22:47:30.989: INFO: Waiting for pod pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a to disappear
Jan 28 22:47:31.076: INFO: Pod pod-configmaps-a3aaf7f8-aca0-481b-a8b6-76a4036c631a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:31.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6641" for this suite.

• [SLOW TEST:8.703 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3821,"failed":0}
SSS
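
"With mappings" refers to the `items` list on a configMap volume, which remaps a key to a chosen file path. A minimal sketch (names are illustrative):

kubectl create configmap my-config --from-literal=my-key=hello

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/config/path/to/my-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config
      items:
      - key: my-key
        path: path/to/my-key   # the key's value appears at this relative path
EOF
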
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:31.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 28 22:47:31.183: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:45.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2423" for this suite.

• [SLOW TEST:14.097 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":236,"skipped":3824,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
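
Even with restartPolicy Always, init containers run once each, in order, to completion before the app container starts; that ordering is what the test observes. A minimal sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:           # run sequentially, each to completion, before "main"
  - name: init-1
    image: busybox
    command: ["/bin/true"]
  - name: init-2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
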
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:45.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:47:45.418: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca" in namespace "downward-api-4505" to be "success or failure"
Jan 28 22:47:45.424: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499826ms
Jan 28 22:47:47.438: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020316043s
Jan 28 22:47:49.450: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031586221s
Jan 28 22:47:51.497: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079062763s
Jan 28 22:47:53.504: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086101021s
Jan 28 22:47:55.515: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096749368s
STEP: Saw pod success
Jan 28 22:47:55.515: INFO: Pod "downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca" satisfied condition "success or failure"
Jan 28 22:47:55.519: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca container client-container: 
STEP: delete the pod
Jan 28 22:47:55.571: INFO: Waiting for pod downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca to disappear
Jan 28 22:47:55.576: INFO: Pod downwardapi-volume-a484e552-9486-492e-b3fb-8f81f398d1ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:47:55.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4505" for this suite.

• [SLOW TEST:10.404 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3878,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
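
"Podname only" means a downwardAPI volume with a single `fieldRef` item. A sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # only the pod's own name is projected
EOF
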
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:47:55.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:47:55.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 28 22:47:57.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6023 create -f -'
Jan 28 22:48:00.002: INFO: stderr: ""
Jan 28 22:48:00.003: INFO: stdout: "e2e-test-crd-publish-openapi-5452-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 28 22:48:00.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6023 delete e2e-test-crd-publish-openapi-5452-crds test-cr'
Jan 28 22:48:00.155: INFO: stderr: ""
Jan 28 22:48:00.155: INFO: stdout: "e2e-test-crd-publish-openapi-5452-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 28 22:48:00.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6023 apply -f -'
Jan 28 22:48:00.678: INFO: stderr: ""
Jan 28 22:48:00.678: INFO: stdout: "e2e-test-crd-publish-openapi-5452-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 28 22:48:00.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6023 delete e2e-test-crd-publish-openapi-5452-crds test-cr'
Jan 28 22:48:00.806: INFO: stderr: ""
Jan 28 22:48:00.806: INFO: stdout: "e2e-test-crd-publish-openapi-5452-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 28 22:48:00.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5452-crds'
Jan 28 22:48:01.214: INFO: stderr: ""
Jan 28 22:48:01.214: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5452-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:48:04.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6023" for this suite.

• [SLOW TEST:9.170 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":238,"skipped":3902,"failed":0}
SSSSSSSSSSSSSSSSSS
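
The schema feature exercised here pairs `x-kubernetes-embedded-resource` with `x-kubernetes-preserve-unknown-fields`, so the embedded object keeps properties the schema does not declare. A minimal sketch (group/kind illustrative; "Waldo" echoes the explain output above):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-embedded-resource: true        # value must carry apiVersion/kind/metadata
            x-kubernetes-preserve-unknown-fields: true  # unknown properties are kept, not pruned
EOF
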
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:48:04.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:48:20.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3667" for this suite.

• [SLOW TEST:16.168 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":239,"skipped":3920,"failed":0}
SSSSSSSSSSSSSSSSSS
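
The lifecycle above is straightforward to replay by hand; the quota's status tracks live configMaps in the namespace (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-configmaps-demo
spec:
  hard:
    configmaps: "2"   # upper bound on configMaps in this namespace
EOF

kubectl create configmap quota-test --from-literal=k=v
kubectl get resourcequota quota-configmaps-demo -o jsonpath='{.status.used.configmaps}'   # usage rises
kubectl delete configmap quota-test                                                      # and is released
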
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:48:20.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-kwwg
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 22:48:21.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kwwg" in namespace "subpath-8920" to be "success or failure"
Jan 28 22:48:21.091: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Pending", Reason="", readiness=false. Elapsed: 7.095481ms
Jan 28 22:48:23.100: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016038567s
Jan 28 22:48:25.106: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021615456s
Jan 28 22:48:27.112: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028022429s
Jan 28 22:48:29.117: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 8.03256692s
Jan 28 22:48:31.124: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 10.03923256s
Jan 28 22:48:33.132: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 12.048212846s
Jan 28 22:48:35.137: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 14.052886031s
Jan 28 22:48:37.146: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 16.062035793s
Jan 28 22:48:39.154: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 18.07016444s
Jan 28 22:48:41.163: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 20.079028348s
Jan 28 22:48:43.170: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 22.085944345s
Jan 28 22:48:45.177: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 24.092769783s
Jan 28 22:48:47.188: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 26.103426743s
Jan 28 22:48:49.195: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Running", Reason="", readiness=true. Elapsed: 28.110695466s
Jan 28 22:48:51.202: INFO: Pod "pod-subpath-test-projected-kwwg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.117361308s
STEP: Saw pod success
Jan 28 22:48:51.202: INFO: Pod "pod-subpath-test-projected-kwwg" satisfied condition "success or failure"
Jan 28 22:48:51.206: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-kwwg container test-container-subpath-projected-kwwg: 
STEP: delete the pod
Jan 28 22:48:51.251: INFO: Waiting for pod pod-subpath-test-projected-kwwg to disappear
Jan 28 22:48:51.260: INFO: Pod pod-subpath-test-projected-kwwg no longer exists
STEP: Deleting pod pod-subpath-test-projected-kwwg
Jan 28 22:48:51.261: INFO: Deleting pod "pod-subpath-test-projected-kwwg" in namespace "subpath-8920"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:48:51.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8920" for this suite.

• [SLOW TEST:30.402 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":240,"skipped":3938,"failed":0}
S
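
`subPath` mounts a single entry of a volume rather than the whole tree; here the volume is a projected one. A minimal sketch (names illustrative):

kubectl create configmap subpath-config --from-literal=key=hello

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/mnt/file"]
    volumeMounts:
    - name: proj
      mountPath: /mnt/file
      subPath: path/file       # mount only this entry of the volume
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: subpath-config
          items:
          - key: key
            path: path/file
EOF
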
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:48:51.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0128 22:49:32.035750       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 22:49:32.035: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:49:32.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8561" for this suite.

• [SLOW TEST:40.725 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":241,"skipped":3939,"failed":0}
SSSSSSSS
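
"Delete options say so" refers to the Orphan propagation policy. A sketch from kubectl (rc name illustrative):

# On kubectl v1.20+ (older clients spell it --cascade=false):
kubectl delete rc my-rc --cascade=orphan
# At the API level this is a DELETE with body {"propagationPolicy": "Orphan"};
# the pods survive and lose their ownerReference to the rc.
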
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:49:32.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1364
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1364
STEP: Creating statefulset with conflicting port in namespace statefulset-1364
STEP: Waiting until pod test-pod starts running in namespace statefulset-1364
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1364
Jan 28 22:49:50.504: INFO: Observed stateful pod in namespace: statefulset-1364, name: ss-0, uid: 17b50b88-e039-4558-9746-c36c277d1221, status phase: Pending. Waiting for statefulset controller to delete.
Jan 28 22:49:52.314: INFO: Observed stateful pod in namespace: statefulset-1364, name: ss-0, uid: 17b50b88-e039-4558-9746-c36c277d1221, status phase: Failed. Waiting for statefulset controller to delete.
Jan 28 22:49:52.371: INFO: Observed stateful pod in namespace: statefulset-1364, name: ss-0, uid: 17b50b88-e039-4558-9746-c36c277d1221, status phase: Failed. Waiting for statefulset controller to delete.
Jan 28 22:49:52.388: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1364
STEP: Removing pod with conflicting port in namespace statefulset-1364
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1364 and is running
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 28 22:50:00.586: INFO: Deleting all statefulset in ns statefulset-1364
Jan 28 22:50:00.590: INFO: Scaling statefulset ss to 0
Jan 28 22:50:20.637: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 22:50:20.641: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:50:20.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1364" for this suite.

• [SLOW TEST:48.614 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":242,"skipped":3947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:50:20.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:50:20.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb" in namespace "projected-6997" to be "success or failure"
Jan 28 22:50:20.807: INFO: Pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466365ms
Jan 28 22:50:22.816: INFO: Pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015334602s
Jan 28 22:50:24.842: INFO: Pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041804075s
Jan 28 22:50:26.848: INFO: Pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047761295s
Jan 28 22:50:28.863: INFO: Pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062965357s
STEP: Saw pod success
Jan 28 22:50:28.864: INFO: Pod "downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb" satisfied condition "success or failure"
Jan 28 22:50:28.869: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb container client-container: 
STEP: delete the pod
Jan 28 22:50:28.998: INFO: Waiting for pod downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb to disappear
Jan 28 22:50:29.005: INFO: Pod downwardapi-volume-75cb84f9-184a-4ee3-92b0-79602d6d88cb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:50:29.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6997" for this suite.

• [SLOW TEST:8.341 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
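
Unlike the podname projection earlier, a CPU request is exposed via `resourceFieldRef`, which must name the container it reads from. A sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container   # resource refs must name a container
              resource: requests.cpu
EOF
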
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:50:29.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Jan 28 22:50:29.730: INFO: created pod pod-service-account-defaultsa
Jan 28 22:50:29.730: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 28 22:50:29.750: INFO: created pod pod-service-account-mountsa
Jan 28 22:50:29.750: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 28 22:50:29.845: INFO: created pod pod-service-account-nomountsa
Jan 28 22:50:29.845: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 28 22:50:29.906: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 28 22:50:29.907: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 28 22:50:29.916: INFO: created pod pod-service-account-mountsa-mountspec
Jan 28 22:50:29.916: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 28 22:50:29.937: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 28 22:50:29.937: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 28 22:50:30.037: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 28 22:50:30.037: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 28 22:50:30.113: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 28 22:50:30.113: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 28 22:50:30.235: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 28 22:50:30.235: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:50:30.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8675" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":244,"skipped":4023,"failed":0}
SSSSSS
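
The pod matrix above probes precedence between the ServiceAccount-level and pod-level `automountServiceAccountToken` flags; when both are set, the pod spec wins. A minimal sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false    # SA-level default: no token volume
---
apiVersion: v1
kind: Pod
metadata:
  name: automount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true   # pod-level setting overrides the SA
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
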
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:50:31.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:50:33.728: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:50:34.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6333" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":245,"skipped":4029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
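
The status sub-resource is switched on per version in the CRD manifest; once enabled, /status is served and updated independently of spec. A sketch, assuming a CRD like the illustrative `widgets` one above that additionally declares a status schema; the --subresource flag needs kubectl v1.24+:

# Per version in the CRD manifest:
#   subresources:
#     status: {}
kubectl patch widget my-widget --subresource=status --type=merge -p '{"status":{"ready":true}}'
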
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:50:35.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-082bd7fb-a92e-4a27-bbc8-4e103718ea51
STEP: Creating a pod to test consume configMaps
Jan 28 22:50:35.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af" in namespace "configmap-9332" to be "success or failure"
Jan 28 22:50:35.819: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 52.792381ms
Jan 28 22:50:37.829: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063104394s
Jan 28 22:50:40.767: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 5.000912263s
Jan 28 22:50:42.902: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 7.136119404s
Jan 28 22:50:44.914: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 9.147280278s
Jan 28 22:50:47.179: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 11.412284214s
Jan 28 22:50:49.193: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 13.426626155s
Jan 28 22:50:51.201: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 15.435012581s
Jan 28 22:50:53.210: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 17.443367709s
Jan 28 22:50:55.217: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Pending", Reason="", readiness=false. Elapsed: 19.450668733s
Jan 28 22:50:57.225: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.458757396s
STEP: Saw pod success
Jan 28 22:50:57.225: INFO: Pod "pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af" satisfied condition "success or failure"
Jan 28 22:50:57.230: INFO: Trying to get logs from node jerma-node pod pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af container configmap-volume-test: 
STEP: delete the pod
Jan 28 22:50:57.324: INFO: Waiting for pod pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af to disappear
Jan 28 22:50:57.355: INFO: Pod pod-configmaps-762e70df-74ae-48e0-889f-d4ccf0d3b5af no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:50:57.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9332" for this suite.

• [SLOW TEST:22.232 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4059,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:50:57.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:50:58.423: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:51:00.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:51:02.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:51:04.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848658, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:51:07.493: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:51:07.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5091" for this suite.
STEP: Destroying namespace "webhook-5091-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.476 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":247,"skipped":4087,"failed":0}
SS
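
The update/patch under test manipulates `webhooks[].rules[].operations` on a MutatingWebhookConfiguration; removing CREATE stops the webhook from intercepting creates. A JSON-patch sketch (configuration name and indices illustrative):

# Drop CREATE from the first rule of the first webhook...
kubectl patch mutatingwebhookconfiguration my-webhook-config --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# ...then put it back:
kubectl patch mutatingwebhookconfiguration my-webhook-config --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
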
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:51:07.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 22:51:08.644: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 22:51:10.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:51:12.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:51:14.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 22:51:16.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715848668, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 22:51:20.394: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 22:51:20.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1475-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:51:21.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4278" for this suite.
STEP: Destroying namespace "webhook-4278-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.553 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":248,"skipped":4089,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:51:21.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:51:32.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-873" for this suite.

• [SLOW TEST:10.641 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4110,"failed":0}
SSS
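
The assertion is simply that a container's stdout comes back through the log subresource. A sketch:

kubectl run busybox-log-demo --image=busybox --restart=Never -- sh -c 'echo Hello from busybox'
# Once the container has run:
kubectl logs busybox-log-demo   # -> Hello from busybox
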
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:51:32.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5983.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5983.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5983.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5983.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 22:51:42.450: INFO: DNS probes using dns-5983/dns-test-96429820-700e-4ab4-a7d7-80d0a8674bde succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:51:42.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5983" for this suite.

• [SLOW TEST:10.742 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":250,"skipped":4113,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:51:42.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 22:51:51.589: INFO: Successfully updated pod "pod-update-74b4a2c8-3a5e-4dd3-b8ff-cf04fe70a0ee"
STEP: verifying the updated pod is in kubernetes
Jan 28 22:51:51.622: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:51:51.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-750" for this suite.

• [SLOW TEST:8.832 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4124,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:51:51.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-636 to expose endpoints map[]
Jan 28 22:51:51.779: INFO: successfully validated that service endpoint-test2 in namespace services-636 exposes endpoints map[] (6.31735ms elapsed)
STEP: Creating pod pod1 in namespace services-636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-636 to expose endpoints map[pod1:[80]]
Jan 28 22:51:55.947: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.128263455s elapsed, will retry)
Jan 28 22:52:01.015: INFO: successfully validated that service endpoint-test2 in namespace services-636 exposes endpoints map[pod1:[80]] (9.196346063s elapsed)
STEP: Creating pod pod2 in namespace services-636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-636 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 28 22:52:05.510: INFO: Unexpected endpoints: found map[40e8e6b2-b789-4ab6-926e-ab5f695f0acd:[80]], expected map[pod1:[80] pod2:[80]] (4.488817693s elapsed, will retry)
Jan 28 22:52:07.544: INFO: successfully validated that service endpoint-test2 in namespace services-636 exposes endpoints map[pod1:[80] pod2:[80]] (6.522499876s elapsed)
STEP: Deleting pod pod1 in namespace services-636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-636 to expose endpoints map[pod2:[80]]
Jan 28 22:52:07.630: INFO: successfully validated that service endpoint-test2 in namespace services-636 exposes endpoints map[pod2:[80]] (76.839939ms elapsed)
STEP: Deleting pod pod2 in namespace services-636
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-636 to expose endpoints map[]
Jan 28 22:52:08.654: INFO: successfully validated that service endpoint-test2 in namespace services-636 exposes endpoints map[] (1.015389518s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:52:08.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-636" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.086 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":252,"skipped":4128,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:52:08.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 28 22:52:08.849: INFO: Waiting up to 5m0s for pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a" in namespace "downward-api-6692" to be "success or failure"
Jan 28 22:52:08.866: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.273893ms
Jan 28 22:52:10.945: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096115273s
Jan 28 22:52:13.022: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173062891s
Jan 28 22:52:15.028: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179460228s
Jan 28 22:52:17.036: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187211968s
Jan 28 22:52:19.051: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201542273s
STEP: Saw pod success
Jan 28 22:52:19.051: INFO: Pod "downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a" satisfied condition "success or failure"
Jan 28 22:52:19.054: INFO: Trying to get logs from node jerma-node pod downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a container dapi-container: 
STEP: delete the pod
Jan 28 22:52:19.130: INFO: Waiting for pod downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a to disappear
Jan 28 22:52:19.149: INFO: Pod downward-api-f6444bfe-3bd2-4da1-958c-b18563d6ba3a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:52:19.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6692" for this suite.

• [SLOW TEST:10.443 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4143,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:52:19.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:52:19.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c" in namespace "projected-9152" to be "success or failure"
Jan 28 22:52:19.396: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.580668ms
Jan 28 22:52:21.404: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043568269s
Jan 28 22:52:23.417: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056463264s
Jan 28 22:52:25.425: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065262585s
Jan 28 22:52:27.431: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071115325s
Jan 28 22:52:29.438: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07825527s
STEP: Saw pod success
Jan 28 22:52:29.438: INFO: Pod "downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c" satisfied condition "success or failure"
Jan 28 22:52:29.445: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c container client-container: 
STEP: delete the pod
Jan 28 22:52:29.498: INFO: Waiting for pod downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c to disappear
Jan 28 22:52:29.521: INFO: Pod downwardapi-volume-3a217649-bec7-43cc-b0d6-f02331efea3c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:52:29.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9152" for this suite.

• [SLOW TEST:10.409 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4161,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:52:29.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-9edb39c4-4464-43fb-ae0c-e38b66421fa6
STEP: Creating a pod to test consume secrets
Jan 28 22:52:29.671: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a" in namespace "projected-6098" to be "success or failure"
Jan 28 22:52:29.720: INFO: Pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a": Phase="Pending", Reason="", readiness=false. Elapsed: 49.001902ms
Jan 28 22:52:31.727: INFO: Pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055219163s
Jan 28 22:52:33.754: INFO: Pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08241397s
Jan 28 22:52:35.761: INFO: Pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090051836s
Jan 28 22:52:37.784: INFO: Pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112562036s
STEP: Saw pod success
Jan 28 22:52:37.784: INFO: Pod "pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a" satisfied condition "success or failure"
Jan 28 22:52:37.814: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 22:52:38.043: INFO: Waiting for pod pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a to disappear
Jan 28 22:52:38.071: INFO: Pod pod-projected-secrets-a882a089-2e07-49bd-9895-2135f9fa977a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:52:38.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6098" for this suite.

• [SLOW TEST:8.502 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:52:38.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 28 22:52:52.353: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 22:52:52.362: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 22:52:54.362: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 22:52:54.367: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 22:52:56.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 22:52:56.372: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 22:52:58.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 22:52:58.371: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 22:53:00.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 22:53:00.381: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 22:53:02.363: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 22:53:02.373: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:53:02.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6781" for this suite.

• [SLOW TEST:24.304 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4202,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:53:02.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 28 22:53:02.514: INFO: Waiting up to 5m0s for pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16" in namespace "downward-api-3416" to be "success or failure"
Jan 28 22:53:02.526: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16": Phase="Pending", Reason="", readiness=false. Elapsed: 11.560457ms
Jan 28 22:53:04.543: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028067221s
Jan 28 22:53:06.560: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045595627s
Jan 28 22:53:08.592: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077258244s
Jan 28 22:53:10.604: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089135808s
Jan 28 22:53:12.612: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097085428s
STEP: Saw pod success
Jan 28 22:53:12.612: INFO: Pod "downward-api-c633d39d-4648-41ca-9642-e251ffb36f16" satisfied condition "success or failure"
Jan 28 22:53:12.615: INFO: Trying to get logs from node jerma-node pod downward-api-c633d39d-4648-41ca-9642-e251ffb36f16 container dapi-container: 
STEP: delete the pod
Jan 28 22:53:12.794: INFO: Waiting for pod downward-api-c633d39d-4648-41ca-9642-e251ffb36f16 to disappear
Jan 28 22:53:12.807: INFO: Pod downward-api-c633d39d-4648-41ca-9642-e251ffb36f16 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:53:12.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3416" for this suite.

• [SLOW TEST:10.519 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4221,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:53:12.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 22:53:13.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00" in namespace "downward-api-5858" to be "success or failure"
Jan 28 22:53:13.111: INFO: Pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00": Phase="Pending", Reason="", readiness=false. Elapsed: 38.348722ms
Jan 28 22:53:15.117: INFO: Pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044916141s
Jan 28 22:53:17.127: INFO: Pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054670921s
Jan 28 22:53:19.139: INFO: Pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06671514s
Jan 28 22:53:21.146: INFO: Pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073131295s
STEP: Saw pod success
Jan 28 22:53:21.146: INFO: Pod "downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00" satisfied condition "success or failure"
Jan 28 22:53:21.149: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00 container client-container: 
STEP: delete the pod
Jan 28 22:53:21.305: INFO: Waiting for pod downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00 to disappear
Jan 28 22:53:21.317: INFO: Pod downwardapi-volume-f8da3581-8b92-4e14-8a79-44919a6a7e00 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:53:21.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5858" for this suite.

• [SLOW TEST:8.424 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4223,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:53:21.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-9c95519e-9bf7-4ef6-a7bd-ba6f6d1f5fc2
STEP: Creating configMap with name cm-test-opt-upd-5835330c-3de5-4659-9f03-d2dfa8bf2114
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9c95519e-9bf7-4ef6-a7bd-ba6f6d1f5fc2
STEP: Updating configmap cm-test-opt-upd-5835330c-3de5-4659-9f03-d2dfa8bf2114
STEP: Creating configMap with name cm-test-opt-create-36390e19-a23e-4899-b817-97dbcb7c0f05
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:54:52.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5255" for this suite.

• [SLOW TEST:91.553 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4243,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:54:52.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jan 28 22:54:52.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9947'
Jan 28 22:54:53.381: INFO: stderr: ""
Jan 28 22:54:53.382: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 22:54:53.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9947'
Jan 28 22:54:53.548: INFO: stderr: ""
Jan 28 22:54:53.548: INFO: stdout: "update-demo-nautilus-4w8cz update-demo-nautilus-6vnsg "
Jan 28 22:54:53.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w8cz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:54:53.697: INFO: stderr: ""
Jan 28 22:54:53.697: INFO: stdout: ""
Jan 28 22:54:53.697: INFO: update-demo-nautilus-4w8cz is created but not running
Jan 28 22:54:58.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9947'
Jan 28 22:54:59.047: INFO: stderr: ""
Jan 28 22:54:59.047: INFO: stdout: "update-demo-nautilus-4w8cz update-demo-nautilus-6vnsg "
Jan 28 22:54:59.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w8cz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:54:59.271: INFO: stderr: ""
Jan 28 22:54:59.271: INFO: stdout: ""
Jan 28 22:54:59.272: INFO: update-demo-nautilus-4w8cz is created but not running
Jan 28 22:55:04.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9947'
Jan 28 22:55:04.391: INFO: stderr: ""
Jan 28 22:55:04.391: INFO: stdout: "update-demo-nautilus-4w8cz update-demo-nautilus-6vnsg "
Jan 28 22:55:04.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w8cz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:04.519: INFO: stderr: ""
Jan 28 22:55:04.519: INFO: stdout: "true"
Jan 28 22:55:04.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4w8cz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:04.640: INFO: stderr: ""
Jan 28 22:55:04.640: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 22:55:04.640: INFO: validating pod update-demo-nautilus-4w8cz
Jan 28 22:55:04.699: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 22:55:04.700: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 22:55:04.700: INFO: update-demo-nautilus-4w8cz is verified up and running
Jan 28 22:55:04.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vnsg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:04.844: INFO: stderr: ""
Jan 28 22:55:04.844: INFO: stdout: "true"
Jan 28 22:55:04.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6vnsg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:04.979: INFO: stderr: ""
Jan 28 22:55:04.979: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 22:55:04.979: INFO: validating pod update-demo-nautilus-6vnsg
Jan 28 22:55:04.987: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 22:55:04.987: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 22:55:04.987: INFO: update-demo-nautilus-6vnsg is verified up and running
STEP: rolling-update to new replication controller
Jan 28 22:55:04.990: INFO: scanned /root for discovery docs: 
Jan 28 22:55:04.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9947'
Jan 28 22:55:34.601: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 22:55:34.602: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 22:55:34.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9947'
Jan 28 22:55:34.748: INFO: stderr: ""
Jan 28 22:55:34.748: INFO: stdout: "update-demo-kitten-7vr5k update-demo-kitten-v22c8 "
Jan 28 22:55:34.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7vr5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:34.843: INFO: stderr: ""
Jan 28 22:55:34.843: INFO: stdout: "true"
Jan 28 22:55:34.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7vr5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:34.961: INFO: stderr: ""
Jan 28 22:55:34.961: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 22:55:34.961: INFO: validating pod update-demo-kitten-7vr5k
Jan 28 22:55:34.969: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 28 22:55:34.969: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 28 22:55:34.969: INFO: update-demo-kitten-7vr5k is verified up and running
Jan 28 22:55:34.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v22c8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:35.075: INFO: stderr: ""
Jan 28 22:55:35.075: INFO: stdout: "true"
Jan 28 22:55:35.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v22c8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9947'
Jan 28 22:55:35.174: INFO: stderr: ""
Jan 28 22:55:35.174: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 22:55:35.174: INFO: validating pod update-demo-kitten-v22c8
Jan 28 22:55:35.180: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 28 22:55:35.181: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 28 22:55:35.181: INFO: update-demo-kitten-v22c8 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:55:35.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9947" for this suite.

• [SLOW TEST:42.302 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":260,"skipped":4248,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:55:35.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-77fc2a7f-1fcf-471d-bb74-6971ec39f63a
STEP: Creating a pod to test consume configMaps
Jan 28 22:55:35.351: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c" in namespace "projected-4362" to be "success or failure"
Jan 28 22:55:35.367: INFO: Pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.612773ms
Jan 28 22:55:37.377: INFO: Pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025980947s
Jan 28 22:55:39.384: INFO: Pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033146478s
Jan 28 22:55:42.274: INFO: Pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.923618727s
Jan 28 22:55:44.866: INFO: Pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.514864992s
STEP: Saw pod success
Jan 28 22:55:44.866: INFO: Pod "pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c" satisfied condition "success or failure"
Jan 28 22:55:44.877: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 22:55:45.321: INFO: Waiting for pod pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c to disappear
Jan 28 22:55:45.504: INFO: Pod pod-projected-configmaps-e8167857-cb31-437d-8357-0f8bad680e9c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:55:45.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4362" for this suite.

• [SLOW TEST:10.346 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4263,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:55:45.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:56:38.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2600" for this suite.

• [SLOW TEST:53.398 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4270,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:56:38.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-e605ad3b-a8c8-4db2-ab53-10a54f7a8926
STEP: Creating configMap with name cm-test-opt-upd-e903c14f-31a9-4d14-a6f5-87b57554f592
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e605ad3b-a8c8-4db2-ab53-10a54f7a8926
STEP: Updating configmap cm-test-opt-upd-e903c14f-31a9-4d14-a6f5-87b57554f592
STEP: Creating configMap with name cm-test-opt-create-dc14c880-63ab-4d28-bd39-26aa3640dfd1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:58:14.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7023" for this suite.

• [SLOW TEST:95.403 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4271,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:58:14.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jan 28 22:58:14.455: INFO: Waiting up to 5m0s for pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e" in namespace "containers-4561" to be "success or failure"
Jan 28 22:58:14.473: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.672266ms
Jan 28 22:58:16.484: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029584386s
Jan 28 22:58:18.496: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040900973s
Jan 28 22:58:20.505: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050372681s
Jan 28 22:58:22.519: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063837997s
Jan 28 22:58:24.578: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123709773s
STEP: Saw pod success
Jan 28 22:58:24.579: INFO: Pod "client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e" satisfied condition "success or failure"
Jan 28 22:58:24.585: INFO: Trying to get logs from node jerma-node pod client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e container test-container: 
STEP: delete the pod
Jan 28 22:58:24.622: INFO: Waiting for pod client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e to disappear
Jan 28 22:58:24.661: INFO: Pod client-containers-ed6f87f1-8bec-40d0-a213-cf245aba585e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:58:24.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4561" for this suite.

• [SLOW TEST:10.337 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4288,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:58:24.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jan 28 22:58:24.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 28 22:58:25.069: INFO: stderr: ""
Jan 28 22:58:25.069: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:58:25.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4882" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":265,"skipped":4310,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:58:25.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 28 22:58:25.206: INFO: Waiting up to 5m0s for pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b" in namespace "emptydir-2011" to be "success or failure"
Jan 28 22:58:25.225: INFO: Pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.178281ms
Jan 28 22:58:27.354: INFO: Pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147256181s
Jan 28 22:58:29.364: INFO: Pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157180185s
Jan 28 22:58:31.373: INFO: Pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166950988s
Jan 28 22:58:33.383: INFO: Pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.17624339s
STEP: Saw pod success
Jan 28 22:58:33.383: INFO: Pod "pod-0435f171-5425-4202-a69b-1fe39e07f08b" satisfied condition "success or failure"
Jan 28 22:58:33.388: INFO: Trying to get logs from node jerma-node pod pod-0435f171-5425-4202-a69b-1fe39e07f08b container test-container: 
STEP: delete the pod
Jan 28 22:58:33.461: INFO: Waiting for pod pod-0435f171-5425-4202-a69b-1fe39e07f08b to disappear
Jan 28 22:58:33.478: INFO: Pod pod-0435f171-5425-4202-a69b-1fe39e07f08b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:58:33.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2011" for this suite.

• [SLOW TEST:8.417 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4313,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:58:33.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 22:58:45.915: INFO: DNS probes using dns-605/dns-test-c834c7d6-819f-47c3-bde3-3372dea1a7fb succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:58:45.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-605" for this suite.

• [SLOW TEST:12.469 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":267,"skipped":4318,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:58:45.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 28 22:58:46.064: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 22:58:49.598: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:59:01.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4468" for this suite.

• [SLOW TEST:15.306 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":268,"skipped":4341,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:59:01.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 28 22:59:07.741: INFO: 10 pods remaining
Jan 28 22:59:07.741: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:07.741: INFO: 
Jan 28 22:59:08.772: INFO: 10 pods remaining
Jan 28 22:59:08.773: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:08.773: INFO: 
Jan 28 22:59:09.599: INFO: 10 pods remaining
Jan 28 22:59:09.599: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:09.599: INFO: 
Jan 28 22:59:12.581: INFO: 10 pods remaining
Jan 28 22:59:12.581: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:12.581: INFO: 
Jan 28 22:59:14.494: INFO: 10 pods remaining
Jan 28 22:59:14.494: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:14.494: INFO: 
Jan 28 22:59:17.339: INFO: 10 pods remaining
Jan 28 22:59:17.339: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:17.339: INFO: 
Jan 28 22:59:19.864: INFO: 10 pods remaining
Jan 28 22:59:19.865: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:19.865: INFO: 
Jan 28 22:59:21.576: INFO: 10 pods remaining
Jan 28 22:59:21.576: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:21.576: INFO: 
Jan 28 22:59:23.345: INFO: 10 pods remaining
Jan 28 22:59:23.345: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:23.345: INFO: 
Jan 28 22:59:24.858: INFO: 10 pods remaining
Jan 28 22:59:24.858: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:24.858: INFO: 
Jan 28 22:59:25.825: INFO: 10 pods remaining
Jan 28 22:59:25.825: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:25.825: INFO: 
Jan 28 22:59:26.638: INFO: 10 pods remaining
Jan 28 22:59:26.639: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:26.639: INFO: 
Jan 28 22:59:27.602: INFO: 10 pods remaining
Jan 28 22:59:27.602: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:27.602: INFO: 
Jan 28 22:59:28.602: INFO: 10 pods remaining
Jan 28 22:59:28.602: INFO: 10 pods have nil DeletionTimestamp
Jan 28 22:59:28.602: INFO: 
STEP: Gathering metrics
W0128 22:59:29.607518       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 22:59:29.608: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 22:59:29.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1554" for this suite.

• [SLOW TEST:28.348 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":269,"skipped":4342,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 22:59:29.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8733, will wait for the garbage collector to delete the pods
Jan 28 22:59:51.598: INFO: Deleting Job.batch foo took: 9.194325ms
Jan 28 22:59:51.899: INFO: Terminating Job.batch foo pods took: 300.498522ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:00:32.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8733" for this suite.

• [SLOW TEST:62.801 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":270,"skipped":4362,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:00:32.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 28 23:00:32.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f" in namespace "projected-3437" to be "success or failure"
Jan 28 23:00:32.645: INFO: Pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.087234ms
Jan 28 23:00:34.655: INFO: Pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057136507s
Jan 28 23:00:36.661: INFO: Pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063322427s
Jan 28 23:00:38.668: INFO: Pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069704793s
Jan 28 23:00:40.674: INFO: Pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076163499s
STEP: Saw pod success
Jan 28 23:00:40.674: INFO: Pod "downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f" satisfied condition "success or failure"
Jan 28 23:00:40.678: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f container client-container: 
STEP: delete the pod
Jan 28 23:00:40.735: INFO: Waiting for pod downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f to disappear
Jan 28 23:00:40.740: INFO: Pod downwardapi-volume-b65102ba-91c3-45f7-a2b4-522ed70b099f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:00:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3437" for this suite.

• [SLOW TEST:8.322 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4365,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:00:40.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:00:40.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7655" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":272,"skipped":4371,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:00:40.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 28 23:00:41.433: INFO: Waiting up to 5m0s for pod "pod-c534d42a-b447-426a-8e05-31e2d470b749" in namespace "emptydir-2699" to be "success or failure"
Jan 28 23:00:41.457: INFO: Pod "pod-c534d42a-b447-426a-8e05-31e2d470b749": Phase="Pending", Reason="", readiness=false. Elapsed: 24.238227ms
Jan 28 23:00:43.467: INFO: Pod "pod-c534d42a-b447-426a-8e05-31e2d470b749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033588165s
Jan 28 23:00:45.473: INFO: Pod "pod-c534d42a-b447-426a-8e05-31e2d470b749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040137592s
Jan 28 23:00:47.480: INFO: Pod "pod-c534d42a-b447-426a-8e05-31e2d470b749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046908997s
Jan 28 23:00:49.491: INFO: Pod "pod-c534d42a-b447-426a-8e05-31e2d470b749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057380285s
STEP: Saw pod success
Jan 28 23:00:49.491: INFO: Pod "pod-c534d42a-b447-426a-8e05-31e2d470b749" satisfied condition "success or failure"
Jan 28 23:00:49.497: INFO: Trying to get logs from node jerma-node pod pod-c534d42a-b447-426a-8e05-31e2d470b749 container test-container: 
STEP: delete the pod
Jan 28 23:00:49.557: INFO: Waiting for pod pod-c534d42a-b447-426a-8e05-31e2d470b749 to disappear
Jan 28 23:00:49.587: INFO: Pod pod-c534d42a-b447-426a-8e05-31e2d470b749 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:00:49.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2699" for this suite.

• [SLOW TEST:8.723 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:00:49.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-16a94be6-f843-4459-977e-91fe04499c43
STEP: Creating a pod to test consume configMaps
Jan 28 23:00:49.715: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23" in namespace "projected-913" to be "success or failure"
Jan 28 23:00:49.731: INFO: Pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 15.000585ms
Jan 28 23:00:51.740: INFO: Pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024296296s
Jan 28 23:00:53.750: INFO: Pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034527669s
Jan 28 23:00:55.761: INFO: Pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045550539s
Jan 28 23:00:57.777: INFO: Pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061293539s
STEP: Saw pod success
Jan 28 23:00:57.777: INFO: Pod "pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23" satisfied condition "success or failure"
Jan 28 23:00:57.782: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 23:00:57.824: INFO: Waiting for pod pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23 to disappear
Jan 28 23:00:57.846: INFO: Pod pod-projected-configmaps-fe0ccb5f-af89-4312-8fd8-4325c4b4cc23 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:00:57.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-913" for this suite.

• [SLOW TEST:8.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4430,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:00:57.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 28 23:00:58.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:01:17.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4216" for this suite.

• [SLOW TEST:19.578 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":275,"skipped":4454,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:01:17.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 28 23:01:18.636: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 28 23:01:18.739: INFO: Number of nodes with available pods: 0
Jan 28 23:01:18.739: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 28 23:01:18.827: INFO: Number of nodes with available pods: 0
Jan 28 23:01:18.827: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:19.836: INFO: Number of nodes with available pods: 0
Jan 28 23:01:19.837: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:20.841: INFO: Number of nodes with available pods: 0
Jan 28 23:01:20.841: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:21.836: INFO: Number of nodes with available pods: 0
Jan 28 23:01:21.837: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:22.888: INFO: Number of nodes with available pods: 0
Jan 28 23:01:22.889: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:23.837: INFO: Number of nodes with available pods: 0
Jan 28 23:01:23.837: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:24.900: INFO: Number of nodes with available pods: 0
Jan 28 23:01:24.900: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:25.839: INFO: Number of nodes with available pods: 0
Jan 28 23:01:25.839: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:26.834: INFO: Number of nodes with available pods: 1
Jan 28 23:01:26.834: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 28 23:01:26.875: INFO: Number of nodes with available pods: 1
Jan 28 23:01:26.875: INFO: Number of running nodes: 0, number of available pods: 1
Jan 28 23:01:27.884: INFO: Number of nodes with available pods: 0
Jan 28 23:01:27.884: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 28 23:01:27.905: INFO: Number of nodes with available pods: 0
Jan 28 23:01:27.905: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:28.915: INFO: Number of nodes with available pods: 0
Jan 28 23:01:28.915: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:29.913: INFO: Number of nodes with available pods: 0
Jan 28 23:01:29.913: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:30.912: INFO: Number of nodes with available pods: 0
Jan 28 23:01:30.912: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:31.915: INFO: Number of nodes with available pods: 0
Jan 28 23:01:31.915: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:32.914: INFO: Number of nodes with available pods: 0
Jan 28 23:01:32.914: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:33.912: INFO: Number of nodes with available pods: 0
Jan 28 23:01:33.912: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:34.918: INFO: Number of nodes with available pods: 0
Jan 28 23:01:34.918: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:35.915: INFO: Number of nodes with available pods: 0
Jan 28 23:01:35.916: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:36.912: INFO: Number of nodes with available pods: 0
Jan 28 23:01:36.912: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:37.915: INFO: Number of nodes with available pods: 0
Jan 28 23:01:37.915: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:38.914: INFO: Number of nodes with available pods: 0
Jan 28 23:01:38.914: INFO: Node jerma-node is running more than one daemon pod
Jan 28 23:01:39.925: INFO: Number of nodes with available pods: 1
Jan 28 23:01:39.925: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6677, will wait for the garbage collector to delete the pods
Jan 28 23:01:40.029: INFO: Deleting DaemonSet.extensions daemon-set took: 29.733151ms
Jan 28 23:01:40.330: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.568605ms
Jan 28 23:01:52.449: INFO: Number of nodes with available pods: 0
Jan 28 23:01:52.450: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 23:01:52.475: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6677/daemonsets","resourceVersion":"4985715"},"items":null}

Jan 28 23:01:52.482: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6677/pods","resourceVersion":"4985715"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:01:52.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6677" for this suite.

• [SLOW TEST:35.082 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":276,"skipped":4475,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:01:52.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 23:01:52.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9568'
Jan 28 23:01:55.332: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 23:01:55.332: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 28 23:01:55.402: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-vhql8]
Jan 28 23:01:55.402: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-vhql8" in namespace "kubectl-9568" to be "running and ready"
Jan 28 23:01:55.451: INFO: Pod "e2e-test-httpd-rc-vhql8": Phase="Pending", Reason="", readiness=false. Elapsed: 48.529734ms
Jan 28 23:01:57.459: INFO: Pod "e2e-test-httpd-rc-vhql8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056499412s
Jan 28 23:01:59.466: INFO: Pod "e2e-test-httpd-rc-vhql8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063845693s
Jan 28 23:02:01.510: INFO: Pod "e2e-test-httpd-rc-vhql8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108082825s
Jan 28 23:02:03.517: INFO: Pod "e2e-test-httpd-rc-vhql8": Phase="Running", Reason="", readiness=true. Elapsed: 8.115275317s
Jan 28 23:02:03.517: INFO: Pod "e2e-test-httpd-rc-vhql8" satisfied condition "running and ready"
Jan 28 23:02:03.517: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-vhql8]
Jan 28 23:02:03.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9568'
Jan 28 23:02:03.800: INFO: stderr: ""
Jan 28 23:02:03.801: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Tue Jan 28 23:02:01.276832 2020] [mpm_event:notice] [pid 1:tid 139795905497960] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Jan 28 23:02:01.276901 2020] [core:notice] [pid 1:tid 139795905497960] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 28 23:02:03.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9568'
Jan 28 23:02:04.008: INFO: stderr: ""
Jan 28 23:02:04.009: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:02:04.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9568" for this suite.

• [SLOW TEST:11.457 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":277,"skipped":4476,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 28 23:02:04.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-4c6e0fc3-b1d8-4fd4-a3a4-ee7fdf84b7cc
STEP: Creating a pod to test consume secrets
Jan 28 23:02:04.183: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802" in namespace "projected-3844" to be "success or failure"
Jan 28 23:02:04.198: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802": Phase="Pending", Reason="", readiness=false. Elapsed: 15.267813ms
Jan 28 23:02:06.206: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02265117s
Jan 28 23:02:08.216: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032766077s
Jan 28 23:02:10.226: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043435003s
Jan 28 23:02:12.233: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049903506s
Jan 28 23:02:14.242: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059014292s
STEP: Saw pod success
Jan 28 23:02:14.242: INFO: Pod "pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802" satisfied condition "success or failure"
Jan 28 23:02:14.247: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802 container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 23:02:14.287: INFO: Waiting for pod pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802 to disappear
Jan 28 23:02:14.325: INFO: Pod pod-projected-secrets-e12ddccc-c4bb-44d9-9581-5f0b998b0802 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 28 23:02:14.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3844" for this suite.

• [SLOW TEST:10.300 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4526,"failed":0}
SSSSSSSSSS
Jan 28 23:02:14.339: INFO: Running AfterSuite actions on all nodes
Jan 28 23:02:14.339: INFO: Running AfterSuite actions on node 1
Jan 28 23:02:14.339: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6799.635 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS